| modelId (string, 9-122 chars) | author (string, 2-36 chars) | last_modified (timestamp[us, UTC], 2021-05-20 01:31:09 to 2026-05-05 06:14:24) | downloads (int64, 0 to 4.03M) | likes (int64, 0 to 4.32k) | library_name (string, 189 classes) | tags (list, 1-237 items) | pipeline_tag (string, 53 classes) | createdAt (timestamp[us, UTC], 2022-03-02 23:29:04 to 2026-05-05 05:54:22) | card (string, 500-661k chars) | entities (list, 0-12 items) |
|---|---|---|---|---|---|---|---|---|---|---|
AXERA-TECH/Qwen3.5-2B-AX650-GPTQ-Int4-C128-P1152-CTX2047 | AXERA-TECH | 2026-03-26T10:31:56Z | 0 | 0 | transformers | [
"transformers",
"Qwen3.5",
"Qwen3.5-2B",
"VLM",
"GPTQ",
"Int4",
"image-text-to-text",
"en",
"zh",
"base_model:Qwen/Qwen3.5-2B",
"base_model:finetune:Qwen/Qwen3.5-2B",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-03-26T08:55:07Z | # Qwen3.5-2B
This version of Qwen3.5-2B has been converted to run on the Axera NPU using **w4a16** quantization.
Compatible with Pulsar2 version: 5.0
## Conversion tool links
If you are interested in model conversion, you can try exporting the axmodel from the original repo:
- https://huggingface.co/Qwen/Q... | [] |
csbhagwant/AgriAssist | csbhagwant | 2025-09-16T11:11:06Z | 8 | 0 | null | [
"gguf",
"Agriculture",
"Plant Disease",
"LLM",
"AI",
"India",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-16T11:06:57Z | # AgriLlama: Plant Disease Information Assistant
AgriLlama is a fine-tuned large language model based on Llama3.2:1B, specifically designed to provide detailed, actionable information about plant diseases to Indian farmers. It offers clear, concise, and locally relevant guidance on disease identification, symptoms, ... | [] |
G1Gru/VITforCarColorClassification | G1Gru | 2026-02-25T23:05:26Z | 13 | 0 | null | [
"safetensors",
"vit",
"license:apache-2.0",
"region:us"
] | null | 2026-02-25T19:51:57Z | # Car Color Classification Model
Fine-tuned ViT-base-patch16 on [VCoR](https://www.kaggle.com/datasets/landrykezebou/vcor-vehicle-color-recognition-dataset)
## Usage
Download the model and put the files in one folder.
``` python
from transformers import ViTImageProcessor, ViTForImageClassification
from PIL impo... | [] |
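The loading snippet in the card above is cut off by the export. A minimal self-contained sketch of the same flow, assuming the downloaded files sit in a local folder (the path `./vit-car-color` is hypothetical) and that the model follows the standard `transformers` ViT classification API:

```python
from PIL import Image
from transformers import ViTForImageClassification, ViTImageProcessor

model_dir = "./vit-car-color"  # hypothetical folder holding the downloaded files
processor = ViTImageProcessor.from_pretrained(model_dir)
model = ViTForImageClassification.from_pretrained(model_dir)

image = Image.open("car.jpg")  # any RGB photo of a car
inputs = processor(images=image, return_tensors="pt")
pred = model(**inputs).logits.argmax(-1).item()
print(model.config.id2label[pred])  # predicted color label
```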
nikitatheestalli/nikitagoth | nikitatheestalli | 2026-04-14T13:27:29Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2026-04-14T07:55:58Z | # My LoRA Model
This LoRA was trained on custom images to capture a specific style/subject.
## Model Details
- **Base Model:** Stable Diffusion 1.5
- **Training:** 500 steps with 20 images
- **LoRA Rank:** 16
- **File Size:** ~13MB
## Usage
### In ComfyUI:
1. Download the `.safetensors` file
2. Place it in `ComfyU... | [] |
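The ComfyUI steps above are truncated; for diffusers users, a minimal sketch of attaching a LoRA like this to the stated Stable Diffusion 1.5 base (calling `load_lora_weights` directly on the repo id is an assumption; it requires the adapter file to be discoverable in the repo):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("nikitatheestalli/nikitagoth")  # adapter repo; filename auto-detected
image = pipe("a portrait in the trained style", num_inference_steps=30).images[0]
image.save("out.png")
```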
mradermacher/Llama-3.2-3B-Expert-System-fp16-GGUF | mradermacher | 2025-11-05T11:10:42Z | 15 | 0 | transformers | [
"transformers",
"gguf",
"en",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-11-05T10:49:39Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
DollarSign/ModernBERT-base-lora-cicflow-1m-r4 | DollarSign | 2026-04-06T17:56:56Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"base_model:adapter:answerdotai/ModernBERT-base",
"lora",
"transformers",
"CyberSecurity",
"PEFT",
"fill-mask",
"en",
"dataset:AINovice2005/cicflow-ids-multiclass",
"base_model:answerdotai/ModernBERT-base",
"license:apache-2.0",
"region:us"
] | fill-mask | 2026-04-06T17:56:20Z | This model fine-tunes ModernBERT-base using LoRA (Low-Rank Adaptation) for parameter-efficient tuning.
It is designed for binary classification tasks where high recall and controlled false positive rates are important.
## Training Configuration
- Seed: 42 (ensures reproducibility)
- Batch sizes: Train = 128, Eval = ... | [] |
dianavdavidson/wh_medium_mucs_mucs_48363_trial | dianavdavidson | 2026-02-18T18:04:14Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2026-02-18T16:12:47Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wh_medium_mucs_mucs_48363_trial
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisp... | [] |
sofieshus/bart-cnn-samsum-peft | sofieshus | 2026-01-24T09:33:13Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:ingeniumacademy/bart-cnn-samsum-finetuned",
"lora",
"transformers",
"base_model:ingeniumacademy/bart-cnn-samsum-finetuned",
"license:mit",
"region:us"
] | null | 2026-01-24T09:32:58Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-samsum-peft
This model is a fine-tuned version of [ingeniumacademy/bart-cnn-samsum-finetuned](https://huggingface.co/ing... | [] |
mradermacher/rldecompile-3b-i1-GGUF | mradermacher | 2026-01-19T06:02:21Z | 15 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ri-char/rldecompile-3b",
"base_model:quantized:ri-char/rldecompile-3b",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-01-19T01:32:21Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
ank32341/functiongemma-270m-it-simple-tool-calling | ank32341 | 2026-01-04T17:58:09Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/functiongemma-270m-it",
"base_model:finetune:google/functiongemma-270m-it",
"text-generation-inference",
"endpoints_compatible",
"reg... | text-generation | 2026-01-04T17:56:31Z | # Model Card for functiongemma-270m-it-simple-tool-calling
This model is a fine-tuned version of [google/functiongemma-270m-it](https://huggingface.co/google/functiongemma-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
questi... | [] |
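The quick-start block above is truncated. A plausible completion following the standard TRL model-card template (the generation settings are assumptions):

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline(
    "text-generation",
    model="ank32341/functiongemma-270m-it-simple-tool-calling",
)
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)
print(output[0]["generated_text"])
```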
clarin-pl/combo-nlp-xlm-roberta-base-danish-ddt-ud2.17 | clarin-pl | 2026-04-01T09:49:39Z | 0 | 0 | null | [
"pytorch",
"dependency-parsing",
"combo",
"universal-dependencies",
"token-classification",
"da",
"dataset:universal_dependencies",
"license:cc-by-sa-4.0",
"region:us"
] | token-classification | 2026-04-01T09:25:45Z | # COMBO-NLP Model for Danish
## Model Description
This is a Danish-language model based on [COMBO-NLP](https://gitlab.clarin-pl.eu/syntactic-tools/combo-nlp), an open-source natural language preprocessing system. It performs:
- sentence segmentation (via [LAMBO](https://gitlab.clarin-pl.eu/syntactic-tools/lambo))
- ... | [] |
seliny2/Chameleon_7B_mGPT | seliny2 | 2025-10-18T20:09:26Z | 0 | 0 | null | [
"safetensors",
"chameleon",
"any-to-any",
"region:us"
] | any-to-any | 2025-10-18T20:05:56Z | This is the Chameleon-7b checkpoint, converted using the script [convert_chameleon_weights_to_hf.py](https://github.com/Alpha-VLLM/Lumina-mGPT/blob/main/lumina_mgpt/model/chameleon/convert_chameleon_weights_to_hf.py) from the [Lumina-mGPT](https://github.com/Alpha-VLLM/Lumina-mGPT) repository.
This release is intended... | [] |
liming22/Qwen3.5-35B-A3B-GPTQ-Int4 | liming22 | 2026-03-14T02:58:59Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5_moe",
"image-text-to-text",
"conversational",
"base_model:Qwen/Qwen3.5-35B-A3B",
"base_model:quantized:Qwen/Qwen3.5-35B-A3B",
"license:apache-2.0",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | image-text-to-text | 2026-03-14T02:58:58Z | # Qwen3.5-35B-A3B-GPTQ-Int4
<img width="400px" src="https://qianwen-res.oss-accelerate.aliyuncs.com/logo_qwen3.5.png">
[](https://chat.qwen.ai)
> [!Note]
> This repository contains int4-quantized model weights and configuration f... | [] |
Bioniok/distilbert-base-uncased-finetuned-imdb | Bioniok | 2026-04-26T15:24:51Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2026-04-26T14:45:48Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/dis... | [] |
mradermacher/bella-bartender-heretic-3b-i1-GGUF | mradermacher | 2026-03-21T21:40:50Z | 3,102 | 0 | transformers | [
"transformers",
"gguf",
"llama-3.1",
"8b",
"english",
"conversational",
"unsloth",
"fine-tuned",
"personality",
"roleplay",
"writing",
"creative-writing",
"creative",
"quantized",
"text-generation",
"instruct-model",
"abliterated",
"heretic",
"uncensored",
"en",
"base_model:j... | text-generation | 2026-03-21T20:59:34Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
kukkuai/Qwen-SEA-LION-v4-4B-VL-8bit | kukkuai | 2026-02-26T00:34:09Z | 25 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_vl",
"image-text-to-text",
"mlx",
"conversational",
"en",
"vi",
"id",
"th",
"my",
"ta",
"tl",
"ms",
"base_model:Qwen/Qwen3-VL-4B-Instruct",
"base_model:finetune:Qwen/Qwen3-VL-4B-Instruct",
"endpoints_compatible",
"4-bit",
"region:us"
] | image-text-to-text | 2026-02-26T00:31:11Z | # mlx-community/Qwen-SEA-LION-v4-4B-VL-8bit
This model was converted to MLX format from [`aisingapore/Qwen-SEA-LION-v4-4B-VL`](https://huggingface.co/aisingapore/Qwen-SEA-LION-v4-4B-VL) using mlx-vlm version **0.3.12**.
Refer to the [original model card](https://huggingface.co/aisingapore/Qwen-SEA-LION-v4-4B-VL) for more details on the model.
## Use with mlx
```bash
pip ins... | [] |
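The `Use with mlx` block above is cut off at `pip ins...`. A hedged sketch of the mlx-vlm Python API: the `load`/`generate` names match recent mlx-vlm releases, but exact signatures vary across versions, so treat this as an assumption:

```python
# pip install mlx-vlm
from mlx_vlm import load, generate  # API names assumed from recent mlx-vlm releases

model, processor = load("kukkuai/Qwen-SEA-LION-v4-4B-VL-8bit")
print(generate(model, processor, prompt="Describe this image.", image="photo.jpg"))
```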
8688chris/Helldivers2ASR_V2 | 8688chris | 2025-12-14T19:15:57Z | 0 | 0 | null | [
"safetensors",
"wav2vec2",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base-960h",
"base_model:finetune:facebook/wav2vec2-base-960h",
"license:apache-2.0",
"region:us"
] | null | 2025-12-14T17:35:28Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Helldivers2ASR_V2
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2ve... | [] |
yuxuezhang/distilbert-base-uncased-finetuned-squad-d5716d28 | yuxuezhang | 2025-12-11T11:11:56Z | 0 | 0 | null | [
"pytorch",
"distilbert",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"license:apache-2.0",
"region:us"
] | question-answering | 2025-12-11T10:29:45Z | # DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) ... | [
{
"start": 2,
"end": 12,
"text": "DistilBERT",
"label": "training method",
"score": 0.8518439531326294
},
{
"start": 98,
"end": 108,
"text": "DistilBERT",
"label": "training method",
"score": 0.754530131816864
},
{
"start": 141,
"end": 151,
"text": "Distil... |
Haxxsh/AffectDynamics-SemEval2026Task2 | Haxxsh | 2026-03-13T13:31:16Z | 0 | 1 | transformers | [
"transformers",
"semeval",
"semeval-2026",
"emotion",
"affect-prediction",
"temporal-nlp",
"roberta",
"text-classification",
"en",
"dataset:semeval",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-12-29T12:16:14Z | # AffectDynamics (Team AGI) — Longitudinal Affect Prediction Model
AffectDynamics is a temporal affect modeling system developed for **SemEval-2026 Task 2: Predicting Variation in Emotional Valence and Arousal over Time from Ecological Essays**.
The model predicts emotional **valence** and **arousal** from longitudin... | [] |
jcamaraideko/modelo_ACT | jcamaraideko | 2025-12-10T08:54:07Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:cualquier/cosa",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-10T08:53:59Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.8059530854225159
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8365488052368164
},
{
"start": 883,
"end": 886,
"text": "act",
"label"... |
DCAgent2/nl2bash-verified-GLM-4_6-traces-32ep-32k-7epochs | DCAgent2 | 2025-11-25T20:25:51Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-19T00:43:29Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nl2bash-verified-GLM-4_6-traces-32ep-32k-7epochs
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwe... | [] |
wszhaorobot/train_ruwm_pick_one_object | wszhaorobot | 2026-02-04T13:30:52Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"ruwm",
"dataset:wszhaorobot/pick_one_object",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-04T13:29:58Z | # Model Card for ruwm
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.c... | [] |
allenai/BAR-2x7B-Math | allenai | 2026-04-20T04:59:51Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"flex_olmo",
"text-generation",
"bar",
"mixture-of-experts",
"olmo",
"conversational",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-19T21:01:35Z | # BAR
BAR (Branch-Adapt-Route) is a modular post-training approach that extends a fully post-trained language model with new domain capabilities via independently trained Mixture-of-Experts. Rather than retraining a single model across all domains, BAR trains independent domain experts — each through its own mid-train... | [] |
CycloneDX/cdx1-pro-30B-Q8_0-GGUF | CycloneDX | 2025-08-10T14:42:33Z | 8 | 0 | gguf | [
"gguf",
"safetensors",
"qwen3_moe",
"text-generation",
"cdxgen",
"transformers",
"sbom",
"supply-chain-security",
"en",
"dataset:CycloneDX/cdx-docs",
"base_model:unsloth/Qwen3-Coder-30B-A3B-Instruct",
"base_model:quantized:unsloth/Qwen3-Coder-30B-A3B-Instruct",
"license:apache-2.0",
"endpo... | text-generation | 2025-08-03T20:34:15Z | # Abstract
We present [cdx1](https://huggingface.co/collections/CycloneDX/cdx1-67a616a859ac0582df99700b) and [cdx1-pro](https://huggingface.co/collections/CycloneDX/cdx1-pro-688e15a3c3b593753ceefc05), a family of language models designed to emulate the expertise of a professional in DevOps, xBOM (Bill of Materials), a... | [] |
prateepm/flan-t5-large-scan-summarization | prateepm | 2026-03-24T11:04:25Z | 21 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-large",
"base_model:finetune:google/flan-t5-large",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2026-01-31T15:54:06Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-large-scan-summarization
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-... | [
{
"start": 190,
"end": 222,
"text": "flan-t5-large-scan-summarization",
"label": "training method",
"score": 0.7162929177284241
}
] |
TencentARC/MotionCrafter | TencentARC | 2026-04-09T10:25:06Z | 0 | 12 | motioncrafter | [
"motioncrafter",
"diffusers",
"safetensors",
"motion",
"video",
"4d",
"diffusion",
"scene-flow",
"image-to-3d",
"en",
"arxiv:2602.08961",
"base_model:stabilityai/stable-video-diffusion-img2vid-xt",
"base_model:finetune:stabilityai/stable-video-diffusion-img2vid-xt",
"license:other",
"reg... | image-to-3d | 2026-02-09T05:39:07Z | <h1 align="center" style="font-size: 1.6em;">MotionCrafter: Dense Geometry and Motion Reconstruction with a 4D VAE</h1>
<p align="center"><strong>🎉 Accepted by CVPR 2026 (Highlight🔥)</strong></p>
<div align="center">
[Ruijie Zhu](https://ruijiezhu94.github.io/ruijiezhu/)<sup>1,2</sup>,
[Jiahao Lu](https://scholar.... | [] |
stockmark/Stockmark-2-100B-Instruct | stockmark | 2025-09-25T03:14:33Z | 45 | 11 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ja",
"en",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-07-08T03:42:05Z | # Stockmark-2-100B-Instruct

## Model description
**Stockmark-2-100B-Instruct** is a 100-billion-parameter large language model built from scratch, with a particular focus on Japanese. It was pre-t... | [
{
"start": 486,
"end": 489,
"text": "SFT",
"label": "training method",
"score": 0.7450328469276428
},
{
"start": 494,
"end": 497,
"text": "DPO",
"label": "training method",
"score": 0.7015409469604492
}
] |
matrixportalx/gemma-3-4b-it-Q4_K_M-GGUF | matrixportalx | 2026-02-10T13:36:40Z | 9 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:google/gemma-3-4b-it",
"base_model:quantized:google/gemma-3-4b-it",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2026-02-10T13:36:27Z | # matrixportalx/gemma-3-4b-it-Q4_K_M-GGUF
This model was converted to GGUF format from [`google/gemma-3-4b-it`](https://huggingface.co/google/gemma-3-4b-it) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface... | [] |
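Beyond llama.cpp itself, GGUF-my-repo conversions like this one can be used from Python via llama-cpp-python; a minimal sketch (the GGUF filename is an assumption; check the repo's file list):

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="matrixportalx/gemma-3-4b-it-Q4_K_M-GGUF",
    filename="gemma-3-4b-it-q4_k_m.gguf",  # assumed filename; check the repo
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What does Q4_K_M quantization mean?"}]
)
print(out["choices"][0]["message"]["content"])
```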
chantiplazita/act_policy_v2 | chantiplazita | 2025-11-23T06:20:54Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:jomell310/cleaning-purp-cube-20esp",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-23T06:20:48Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
rdw79/gemma3-1b-morehopqa-finetune | rdw79 | 2025-09-06T07:40:27Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-06T06:01:34Z | # Model Card for outputs
This model is a fine-tuned version of unsloth/gemma-3-1b-it-unsloth-bnb-4bit. It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
g... | [] |
Aquiles-ai/HunyuanVideo-1.5-480p-fp8 | Aquiles-ai | 2026-01-06T21:03:36Z | 0 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-video",
"video-generation",
"480p",
"fp8",
"quantized",
"HunyuanVideo",
"en",
"region:us"
] | text-to-video | 2025-12-23T22:15:38Z | # HunyuanVideo-1.5-480p-fp8
This is a repackaged version of <a href="https://huggingface.co/tencent/HunyuanVideo-1.5"><b>Tencent's HunyuanVideo-1.5</b></a>, containing the 480p T2V transformer quantized to **fp8** along with essential components required for inference. This optimized package offers reduced memory foot... | [] |
WaiLwin/topology_results | WaiLwin | 2025-09-08T13:18:56Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased-distilled-squad",
"base_model:finetune:distilbert/distilbert-base-uncased-distilled-squad",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_com... | text-classification | 2025-08-18T14:17:52Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# topology_results
This model is a fine-tuned version of [distilbert/distilbert-base-uncased-distilled-squad](https://huggingface.c... | [] |
Faless/xvla-harvest-noee-right-solo-parete | Faless | 2026-02-28T15:19:53Z | 33 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"xvla",
"dataset:Faless/piper-red-apples-harvest-2cam-joints-right-ego-cam",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-28T15:18:45Z | # Model Card for xvla
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.c... | [] |
himalaya-ai/himalayagpt-0.5b | himalaya-ai | 2026-05-04T09:38:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"nanochat",
"text-generation",
"causal-lm",
"trust-remote-code",
"custom_code",
"region:us"
] | text-generation | 2026-05-04T07:31:26Z | # himalaya-ai/himalayagpt-0.5b
Exported from nanochat checkpoints with custom `transformers` remote code.
## Load
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
repo = "himalaya-ai/himalayagpt-0.5b"
tok = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoMo... | [] |
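The load snippet above is truncated at the `model = AutoMo...` line. A plausible completion, assuming the standard remote-code causal-LM path implied by the card's tags:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "himalaya-ai/himalayagpt-0.5b"
tok = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

inputs = tok("Hello from the Himalayas:", return_tensors="pt")
print(tok.decode(model.generate(**inputs, max_new_tokens=40)[0]))
```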
kuririrn/qwen3-4b-agent-trajectory_alf_admissible-lora-constraint_gen-dist_allign | kuririrn | 2026-02-23T04:57:03Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"constrained-generation",
"distribution-alignment",
"text-generation",
"conversational",
"en",
"dataset:kuririrn/sft_alfworld_trajectory_dataset_v3to5_admissible",
"base_model:Qwen/Qwen3-4B-Instruct-2507"... | text-generation | 2026-02-23T04:55:20Z | # qwen3-4b-agent-trajectory_alf_admissible-lora-constraint_gen-dist_allign
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
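Since only adapter weights are shipped, loading means attaching them to the base model; a minimal PEFT sketch of that step (device placement is an assumption):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen3-4B-Instruct-2507"
adapter_id = "kuririrn/qwen3-4b-agent-trajectory_alf_admissible-lora-constraint_gen-dist_allign"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA adapter
```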
## Training Objective
Thi... | [
{
"start": 105,
"end": 109,
"text": "LoRA",
"label": "training method",
"score": 0.8691154718399048
},
{
"start": 176,
"end": 180,
"text": "LoRA",
"label": "training method",
"score": 0.8856202960014343
},
{
"start": 222,
"end": 226,
"text": "LoRA",
"l... |
mohamedamgad2002/pegasus-samsum-finetuned | mohamedamgad2002 | 2025-09-24T11:16:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"base_model:google/pegasus-xsum",
"base_model:finetune:google/pegasus-xsum",
"endpoints_compatible",
"region:us"
] | null | 2025-09-24T11:03:48Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum-finetuned
This model is a fine-tuned version of [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum) ... | [] |
Sungyee/xlm-roberta-base-finetuned-panx-it | Sungyee | 2025-09-18T13:14:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-09-18T13:11:41Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-ba... | [] |
mlx-community/HY-MT1.5-7B-4bit | mlx-community | 2026-01-03T18:31:08Z | 82 | 0 | mlx | [
"mlx",
"safetensors",
"hunyuan_v1_dense",
"text-generation",
"transformers",
"translation",
"conversational",
"zh",
"en",
"fr",
"pt",
"es",
"ja",
"tr",
"ru",
"ar",
"ko",
"th",
"it",
"de",
"vi",
"ms",
"id",
"tl",
"hi",
"pl",
"cs",
"nl",
"km",
"my",
"fa",
... | text-generation | 2026-01-03T17:39:05Z | # mlx-community/HY-MT1.5-7B-4bit
The Model [mlx-community/HY-MT1.5-7B-4bit](https://huggingface.co/mlx-community/HY-MT1.5-7B-4bit) was converted to MLX format from [tencent/HY-MT1.5-7B](https://huggingface.co/tencent/HY-MT1.5-7B) using mlx-lm version **0.29.1**.
You can find other similar translation-related MLX mode... | [] |
jialicheng/unlearn_cifar10_resnet-50_bad_teaching_6_42 | jialicheng | 2025-10-22T15:53:04Z | 0 | 0 | null | [
"safetensors",
"resnet",
"image-classification",
"vision",
"generated_from_trainer",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:apache-2.0",
"region:us"
] | image-classification | 2025-10-22T15:52:53Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 42
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the cifar10 dataset... | [] |
MultilingualUnigramLM/las-nl-tokenizers-granite3_0-8b-v49152-hun | MultilingualUnigramLM | 2026-05-04T21:09:28Z | 0 | 0 | tokenizers | [
"tokenizers",
"LangMAP",
"unsupervised",
"tokenizer",
"hun",
"region:us"
] | null | 2026-05-04T21:09:27Z | # Base + Language-Specific LangMAP — granite3_0-8b × hun_Latn
Unsupervised tokenization specialised for **hun_Latn**, derived from the
**granite3_0-8b** base BPE tokenizer using the LangMAP framework.
This repository bundles:
- `base_tokenizer.json` — joint LAS Unigram base
- `langspec_hun_Latn.json` — language-speci... | [] |
elina20052005/gustav_style_LoRA | elina20052005 | 2026-03-24T11:31:34Z | 3 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"re... | text-to-image | 2026-03-24T11:31:27Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - elina20052005/gustav_style_LoRA
<Gallery />
## Model description
These are elina20052005/gustav... | [
{
"start": 204,
"end": 208,
"text": "LoRA",
"label": "training method",
"score": 0.7187605500221252
},
{
"start": 332,
"end": 336,
"text": "LoRA",
"label": "training method",
"score": 0.8018986582756042
},
{
"start": 479,
"end": 483,
"text": "LoRA",
"l... |
ianyang02/aita_qwen3_4b_nta_yta_replaced_length_cleaned | ianyang02 | 2026-02-11T04:55:45Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-classification",
"generated_from_trainer",
"trl",
"reward-trainer",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-4B-Instruct-2507",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-02-11T01:07:20Z | # Model Card for aita_qwen3_4b_nta_yta_replaced_length_cleaned
This model is a fine-tuned version of [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
text... | [] |
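The quick-start block above is truncated at `text...`. Given the `reward-trainer` and `text-classification` tags, a plausible completion that scores a single text (the example input is an assumption):

```python
from transformers import pipeline

scorer = pipeline(
    "text-classification",
    model="ianyang02/aita_qwen3_4b_nta_yta_replaced_length_cleaned",
)
print(scorer("I refused to lend my car to my roommate. AITA?"))
```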
Maple3788/gemma-2-2B-it-thinking-function_calling-V0 | Maple3788 | 2025-10-09T06:18:32Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-10-09T06:17:46Z | # Model Card for gemma-2-2B-it-thinking-function_calling-V0
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you ha... | [] |
tinyllamafinetuner/gemma3_banking77 | tinyllamafinetuner | 2026-03-23T08:09:30Z | 15 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:google/gemma-3-270m",
"lora",
"base_model:google/gemma-3-270m",
"license:gemma",
"region:us"
] | null | 2026-03-23T08:09:25Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma3_banking77
This model is a fine-tuned version of [google/gemma-3-270m](https://huggingface.co/google/gemma-3-270m) on an un... | [] |
ineso22/affine-pin-5HC7abHJMDKUJ3Ekp8huqsQqjpatoDPgojNAXy1jySjystgx | ineso22 | 2026-01-12T22:51:33Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"deepseek_v3",
"text-generation",
"conversational",
"custom_code",
"arxiv:2412.19437",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"fp8",
"region:us"
] | text-generation | 2026-01-12T22:51:20Z | # DeepSeek-V3.1
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="cen... | [] |
DavidAU/Qwen3-4B-Thinking-2507-Gemini-3-Pro-Preview-High-Reasoning-Distill-Heretic-Abliterated | DavidAU | 2025-12-09T07:06:32Z | 9 | 3 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"heretic",
"uncensored",
"decensored",
"abliterated",
"finetune",
"conversational",
"base_model:TeichAI/Qwen3-4B-Thinking-2507-Gemini-3-Pro-Preview-High-Reasoning-Distill",
"base_model:finetune:TeichAI/Qwen3-4B-Thinking-2507-Gemini-3-P... | text-generation | 2025-12-09T06:26:00Z | <h2>Qwen3-4B-Thinking-2507-Gemini-3-Pro-Preview-High-Reasoning-Distill-Heretic-Abliterated</h2>
Abliterated/uncensored by [Heretic](https://github.com/p-e-w/heretic) v1.0.1
Refusals: 8/100, KL divergence: 0.06
Original Model Refusal rate: 87/100
Context: 256k
ENJOY THE FREEDOM!
<B>This model part of the new Qwen3-... | [] |
astradzhao/Qwen2.5-3B-Instruct-Q4_0-GGUF | astradzhao | 2025-08-26T23:11:13Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-08-26T23:11:03Z | # astradzhao/Qwen2.5-3B-Instruct-Q4_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-3B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://hu... | [] |
BootesVoid/cmexleien05k3sr53jpx62got_cmexnn3o105ngsr53ns35y2dh | BootesVoid | 2025-08-30T03:21:06Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-08-30T03:21:05Z | # Cmexleien05K3Sr53Jpx62Got_Cmexnn3O105Ngsr53Ns35Y2Dh
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https:... | [] |
Muapi/flux-ancient-style-lora | Muapi | 2025-08-16T14:06:00Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-16T14:05:45Z | # Flux Ancient Style Lora

**Base model**: Flux.1 D
**Trained words**: ancientstyle
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"C... | [] |
kwnam1118/distilbert-rotten-tomatoes | kwnam1118 | 2025-11-21T00:47:28Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-21T00:43:54Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-rotten-tomatoes
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/dist... | [] |
TechxGenus/DeepSeek-Coder-V2-Lite-Base-AWQ | TechxGenus | 2024-06-22T14:29:37Z | 23 | 4 | transformers | [
"transformers",
"safetensors",
"deepseek_v2",
"text-generation",
"conversational",
"custom_code",
"arxiv:2401.06066",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | 2024-06-18T18:53:42Z | <!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V2" />
</div>
<hr>
<div align="center" style="line-... | [] |
Univers4l/Gemma-4-26B-A4B-it-base | Univers4l | 2026-04-04T05:24:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma4",
"image-text-to-text",
"conversational",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-04-04T05:22:49Z | <div align="center">
<img src="https://ai.google.dev/gemma/images/gemma4_banner.png">
</div>
<p align="center">
<a href="https://huggingface.co/collections/google/gemma-4" target="_blank">Hugging Face</a> |
<a href="https://github.com/google-gemma" target="_blank">GitHub</a> |
<a href="https://blog.google... | [] |
riku4050/act_policy | riku4050 | 2026-05-01T14:18:40Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:riku4050/record-test-2",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-05-01T14:18:19Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
ApathyGhost/SynthModelGal_DMD2 | ApathyGhost | 2025-11-06T17:47:10Z | 6 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:RunDiffusion/Juggernaut-XL-v9",
"base_model:adapter:RunDiffusion/Juggernaut-XL-v9",
"license:cc-by-sa-4.0",
"region:us"
] | text-to-image | 2025-11-06T16:19:29Z | # SynthGals
<Gallery />
## Model description
Synthwave and Cyberpunk neon girlies. A bit janky; I intend to have a V2 out that is much more user-friendly. Start with standard-ish LCM/DMD2 settings and go from there.
With a DMD2 LoRA, faces can be a bit janky without Face Restore.
CFG - 1-2
Steps - 8
Sampler - ... | [] |
allenai/G_post_LQK_8kv_8k_14k | allenai | 2026-04-30T18:00:01Z | 89 | 0 | transformers | [
"transformers",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2026-04-28T23:24:53Z | # Model Summary
This is one of the models from the OlmPool set of architectural variations. The final checkpoint for each model is a 7-8B model that has been trained to 150B tokens (140B in pretraining and 10B in context extension). Note that these models are *early in pretraining* with little-to-no instruction-format... | [] |
alexanderyj/gemma-3-4b-it_fine_tuning_base-tr_synth_font_180000_tr_echt_2000_2026-03-20 | alexanderyj | 2026-03-20T00:33:00Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-4b-it",
"base_model:finetune:google/gemma-3-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2026-03-19T23:40:36Z | # Model Card for gemma-3-4b-it_fine_tuning_base-tr_synth_font_180000_tr_echt_2000_2026-03-20
This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers impor... | [] |
ariefansclub/han-adaptive-posture-controller-v1 | ariefansclub | 2026-02-27T15:44:41Z | 0 | 0 | null | [
"humanoid",
"posture",
"control-system",
"stability",
"tabular-regression",
"en",
"license:mit",
"region:us"
] | tabular-regression | 2026-02-27T15:43:52Z | # Adaptive Posture Controller
## Overview
This model is designed to dynamically adjust a humanoid's posture angles to maintain body stability.
## Input Features
- torso_tilt_deg
- center_of_mass_offset_cm
- ground_contact_time_ms
- ankle_torque_nm
- hip_torque_nm
## Output
- posture_adjustment_angle_deg
## M... | [] |
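Given the listed inputs and output, a sketch of how such a tabular regressor would be called; the card does not state the file format, so the scikit-learn/joblib packaging and filename below are pure assumptions:

```python
import joblib  # assumption: model shipped as a pickled scikit-learn regressor
import numpy as np

model = joblib.load("adaptive_posture_controller.pkl")  # hypothetical filename
features = np.array([[2.5,     # torso_tilt_deg
                      1.2,     # center_of_mass_offset_cm
                      320.0,   # ground_contact_time_ms
                      14.0,    # ankle_torque_nm
                      22.0]])  # hip_torque_nm
print(model.predict(features))  # -> posture_adjustment_angle_deg
```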
ooeoeo/opus-mt-caenes-eo-ct2-float16 | ooeoeo | 2026-04-17T11:48:55Z | 0 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"custom",
"license:apache-2.0",
"region:us"
] | translation | 2026-04-17T11:48:39Z | # ooeoeo/opus-mt-caenes-eo-ct2-float16
CTranslate2 float16 quantized version of `Helsinki-NLP/opus-mt-caenes-eo`.
Converted for use in the [ooeoeo](https://ooeoeo.com) desktop engine
with the `opus-mt-server` inference runtime.
## Source
- Upstream model: [Helsinki-NLP/opus-mt-caenes-eo](https://huggingface.co/Hels... | [] |
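A minimal CTranslate2 usage sketch for this float16 conversion, assuming the model directory was downloaded locally and the SentencePiece files (`source.spm`/`target.spm`) come from the upstream Helsinki-NLP repo:

```python
import ctranslate2
import sentencepiece as spm

sp_src = spm.SentencePieceProcessor(model_file="source.spm")  # from the upstream Opus-MT repo
sp_tgt = spm.SentencePieceProcessor(model_file="target.spm")
translator = ctranslate2.Translator("opus-mt-caenes-eo-ct2-float16", compute_type="float16")

tokens = sp_src.encode("Bon dia a tothom.", out_type=str)
result = translator.translate_batch([tokens])
print(sp_tgt.decode(result[0].hypotheses[0]))
```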
GinNoV111/Llama-2.7B-LoRA-FinancialPhrasebank | GinNoV111 | 2026-05-03T17:59:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"unsloth",
"trl",
"base_model:unsloth/llama-2-7b",
"base_model:finetune:unsloth/llama-2-7b",
"endpoints_compatible",
"region:us"
] | null | 2026-05-03T17:59:17Z | # Model Card for outputs_lora
This model is a fine-tuned version of [unsloth/llama-2-7b](https://huggingface.co/unsloth/llama-2-7b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only g... | [] |
PleIAs/CommonLingua | PleIAs | 2026-04-28T10:01:57Z | 30 | 7 | pytorch | [
"pytorch",
"language-identification",
"lid",
"byte-level",
"corpus-curation",
"african-languages",
"text-classification",
"multilingual",
"license:apache-2.0",
"region:us"
] | text-classification | 2026-04-27T19:37:39Z | # CommonLingua
CommonLingua is a 2.35-million-parameter language identification model trained on 2,482,568 paragraphs from Structured Wikipedia and [Common Corpus](https://huggingface.co/datasets/PleIAs/common_corpus), trained by Pleias in partnership with the GSMA's "AI Language Models in Africa, by Africa, for Afric... | [] |
mradermacher/Gemma-4-64E-A4B-Heretic-GGUF | mradermacher | 2026-04-17T11:20:33Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:blascotobasco/Gemma-4-64E-A4B-Heretic",
"base_model:quantized:blascotobasco/Gemma-4-64E-A4B-Heretic",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-17T10:17:12Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: 1 -->
static ... | [] |
compellit/llama-carvalho-scansion-gl-sg | compellit | 2026-03-30T17:29:07Z | 17 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Nos-PT/Llama-Carvalho-PT-GL",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"gl",
"base_model:Nos-PT/Llama-Carvalho-PT-GL",
"license:llama3.1",
"region:us"
] | text-generation | 2026-03-29T18:29:29Z | # Model Card for llama-carvalho-scansion-gl-sg
[Nos-PT/Llama-Carvalho-PT-GL](https://huggingface.co/Nos-PT/Llama-Carvalho-PT-GL) fine-tuned for scansion (lexical-to-metrical syllabification).
The checkpoint was uploaded using `HfApi.upload_folder()` given problems when pushing
the LoRA adapters to HF in any other of ... | [] |
ahmedshahriar/GhostWriterLlama-3.2-1B | ahmedshahriar | 2025-11-08T10:35:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"instruction-tuned",
"trl",
"sft",
"peft",
"lora",
"llm-as-a-judge",
"evaluation",
"llama-3.2-1B",
"llmGhostWriter",
"en",
"dataset:ahmedshahriar/llmGhostWriter",
"dataset:ahmedshahri... | text-generation | 2025-09-26T23:35:07Z | # GhostWriterLlama-3.2-1B
- **Developed by:** Ahmed Shahriar Sakib
- **License:** Apache 2.0
- **Finetuned from model:** unsloth/Llama-3.2-1B
- **Fine-tuning dataset:** ahmedshahriar/llmGhostWriter (instruction-response)
- **Use-case:** Writing/ghost-writing style assistant for generating expository and creativ... | [
{
"start": 126,
"end": 133,
"text": "unsloth",
"label": "training method",
"score": 0.7507002949714661
}
] |
ojaffe/20260411-190341-align-qwen-0d3d-2026-04-12-018-ob-correction | ojaffe | 2026-04-12T08:23:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-12T08:22:29Z | # Model Card for 2026-04-12-018-ob-correction
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the pas... | [
{
"start": 149,
"end": 152,
"text": "TRL",
"label": "training method",
"score": 0.7881792187690735
},
{
"start": 661,
"end": 664,
"text": "DPO",
"label": "training method",
"score": 0.8512699604034424
},
{
"start": 951,
"end": 954,
"text": "DPO",
"labe... |
ibokajordan/V2_MBARt50_RAG_finetuned | ibokajordan | 2025-12-17T06:19:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/mbart-large-50-many-to-many-mmt",
"base_model:finetune:facebook/mbart-large-50-many-to-many-mmt",
"endpoints_compatible",
"region:us"
] | null | 2025-12-17T06:19:01Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V2_MBARt50_RAG_finetuned
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/... | [] |
mradermacher/Gemma-4-E4B-Abliterated-i1-GGUF | mradermacher | 2026-04-29T12:43:30Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:DuoNeural/Gemma-4-E4B-Abliterated",
"base_model:quantized:DuoNeural/Gemma-4-E4B-Abliterated",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-04-29T11:57:55Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
KayeGomi/ppo-Pyramids-Training | KayeGomi | 2026-03-11T08:33:33Z | 12 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2026-03-11T08:33:28Z | # **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/... | [
{
"start": 4,
"end": 7,
"text": "ppo",
"label": "training method",
"score": 0.7073347568511963
},
{
"start": 70,
"end": 73,
"text": "ppo",
"label": "training method",
"score": 0.70771723985672
}
] |
AbrarAbhinaya/distilbertScenario4-news-classifier | AbrarAbhinaya | 2025-10-28T10:02:25Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-10-28T07:28:33Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbertScenario4-news-classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distil... | [] |
NanEi/sealion_merge_bot_v5-Q8_0-GGUF | NanEi | 2025-08-14T09:45:08Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:NanEi/sealion_merge_bot_v5",
"base_model:quantized:NanEi/sealion_merge_bot_v5",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T09:44:26Z | # NanEi/sealion_merge_bot_v5-Q8_0-GGUF
This model was converted to GGUF format from [`NanEi/sealion_merge_bot_v5`](https://huggingface.co/NanEi/sealion_merge_bot_v5) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://hu... | [] |
FusionCow/gemma-4-31B-Q3_K_M-GGUF | FusionCow | 2026-04-11T07:30:04Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:google/gemma-4-31B",
"base_model:quantized:google/gemma-4-31B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-04-11T07:29:29Z | # FusionCow/gemma-4-31B-Q3_K_M-GGUF
This model was converted to GGUF format from [`google/gemma-4-31B`](https://huggingface.co/google/gemma-4-31B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google... | [] |
Tentoumaru/lora-structeval-unsloth_5e-5_2048_epo2_msk1_upsampt2x15 | Tentoumaru | 2026-02-20T12:24:04Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:unsloth/Qwen3-4B-Instruct-2507",
"base_model:adapter:unsloth/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-20T12:23:51Z | <Tentoumaru/lora-structeval-unsloth_5e-5_2048_epo2_msk1_upsampt2x15>
This repository provides a **LoRA adapter** fine-tuned from
**unsloth/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective... | [
{
"start": 132,
"end": 139,
"text": "unsloth",
"label": "training method",
"score": 0.809456467628479
},
{
"start": 173,
"end": 178,
"text": "QLoRA",
"label": "training method",
"score": 0.747585117816925
},
{
"start": 576,
"end": 583,
"text": "unsloth",
... |
antebe1/cc-D8k-nol1-k90 | antebe1 | 2026-03-30T05:14:10Z | 0 | 0 | null | [
"sparse-autoencoder",
"crosscoder",
"interpretability",
"qwen2",
"mechanistic-interpretability",
"dictionary-learning",
"license:mit",
"region:us"
] | null | 2026-03-30T05:14:04Z | # cc-D8k-nol1-k90
A **CrossCoder** sparse crosscoder trained to compare layer-13 activations between:
- **Model A (ToolRL)**: `chengq9/ToolRL-Qwen2.5-3B` — fine-tuned with tool-use reinforcement learning
- **Model B (Base)**: `Qwen/Qwen2.5-3B` — vanilla base model
## What is this?
This model learns a sparse dictiona... | [
{
"start": 173,
"end": 204,
"text": "tool-use reinforcement learning",
"label": "training method",
"score": 0.733012855052948
}
] |
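To make the crosscoder setup above concrete, a toy PyTorch sketch of a two-model sparse crosscoder with a top-k constraint; the dictionary size (8192) and k (90) are guesses read off the repo name `D8k`/`k90`, and the hidden size matches Qwen2.5-3B, so treat all of it as assumptions:

```python
import torch
import torch.nn as nn

class CrossCoder(nn.Module):
    """Toy sparse crosscoder over paired activations from two models."""
    def __init__(self, d_model=2048, dict_size=8192, k=90):
        super().__init__()
        self.k = k
        # Per-model encoders feeding one shared feature space.
        self.enc_a = nn.Linear(d_model, dict_size, bias=False)
        self.enc_b = nn.Linear(d_model, dict_size, bias=False)
        # Per-model decoders: each feature gets a direction in each model.
        self.dec_a = nn.Linear(dict_size, d_model, bias=False)
        self.dec_b = nn.Linear(dict_size, d_model, bias=False)

    def forward(self, act_a, act_b):
        pre = self.enc_a(act_a) + self.enc_b(act_b)  # joint pre-activations
        topk = torch.topk(pre, self.k, dim=-1)       # keep the k largest features
        codes = torch.zeros_like(pre).scatter_(-1, topk.indices, torch.relu(topk.values))
        return self.dec_a(codes), self.dec_b(codes), codes

cc = CrossCoder()
a = torch.randn(4, 2048)  # stand-in for ToolRL-model layer-13 activations
b = torch.randn(4, 2048)  # stand-in for base-model layer-13 activations
rec_a, rec_b, codes = cc(a, b)
loss = ((rec_a - a) ** 2).mean() + ((rec_b - b) ** 2).mean()
```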
btbtyler09/shrew-2b | btbtyler09 | 2026-04-14T11:46:32Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"qwen3.5",
"lora",
"peft",
"vllm",
"document-processing",
"text-generation",
"base_model:Qwen/Qwen3.5-2B",
"base_model:adapter:Qwen/Qwen3.5-2B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-30T23:05:39Z | # Shrew LoRA Adapters
> **Work in progress** -- adapters are functional but under active development.
LoRA adapters for [Qwen/Qwen3.5-2B](https://huggingface.co/Qwen/Qwen3.5-2B) fine-tuned for structured extraction as part of a production RAG application. These are the models that power [Shrew's](https://github.com/b... | [] |
gtungare/MyGemmaNPC | gtungare | 2025-12-30T22:09:30Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-30T22:05:53Z | # Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could ... | [] |
ArshiaE/smolvla_all_tasks_10_percent_merged-stacking-cubes | ArshiaE | 2026-02-25T18:18:32Z | 14 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:ArshiaE/stacking-cubes",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-25T18:18:21Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
Pavloffm/qwen-commit-lora | Pavloffm | 2026-05-01T11:54:50Z | 0 | 0 | peft | [
"peft",
"safetensors",
"conventional-commits",
"qwen2.5-coder",
"text-generation",
"code-llm",
"fine-tuned",
"lora",
"qlora",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-Coder-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-Coder-3B-Instruct",
"license:mit",
"region:us"
] | text-generation | 2026-05-01T11:54:07Z | # Qwen Commit LoRA - Conventional Commit Message Generator
Generates conventional commit messages from git diffs using a fine-tuned Qwen2.5-Coder-3B model with QLoRA adapters.
## Model Details
- **Base Model**: [Qwen/Qwen2.5-Coder-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct)
- **Fine-tuning Me... | [] |
shiraiwaiwaiwa/test105_20260217 | shiraiwaiwaiwa | 2026-02-17T09:22:57Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:u-10bei/dbbench_sft_dataset_react_v4",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",... | text-generation | 2026-02-17T09:21:05Z | # qwen3-4b-agent-trajectory-lora
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **multi-tu... | [
{
"start": 63,
"end": 67,
"text": "LoRA",
"label": "training method",
"score": 0.8971583247184753
},
{
"start": 134,
"end": 138,
"text": "LoRA",
"label": "training method",
"score": 0.9221088886260986
},
{
"start": 180,
"end": 184,
"text": "LoRA",
"lab... |
legalaspro/act-so101-greenblack-cube-cup-pnp-50hz-v1 | legalaspro | 2026-03-02T10:54:25Z | 35 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:legalaspro/so101-greenblack-cube-cup-pnp-50hz",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-02T10:54:15Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
Irfanuruchi/qwen2.5-1.5b-hvac-precheck-lora-v5 | Irfanuruchi | 2026-03-10T20:55:00Z | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2026-03-10T20:37:38Z | # Qwen2.5-1.5B Building Engineering Precheck LoRA (V5)
Repository: `Irfanuruchi/qwen2.5-1.5b-buildeng-precheck-lora-v5`
Base model: **Qwen2.5-1.5B-Instruct**
Fine-tuning method: **LoRA (Unsloth)**
---
# Overview
This repository provides a **LoRA adapter for Qwen2.5-1.5B-Instruct** fine-tuned for **building engi... | [
{
"start": 185,
"end": 189,
"text": "LoRA",
"label": "training method",
"score": 0.8623465299606323
},
{
"start": 249,
"end": 253,
"text": "LoRA",
"label": "training method",
"score": 0.8059355616569519
},
{
"start": 1095,
"end": 1099,
"text": "LoRA",
... |
phospho-app/ACT-grab-color-ball-d7tza7mnu8 | phospho-app | 2025-11-02T14:28:33Z | 0 | 0 | phosphobot | [
"phosphobot",
"act",
"robotics",
"dataset:Fooping/grab-color-ball",
"region:us"
] | robotics | 2025-11-02T14:28:32Z | ---
datasets: Fooping/grab-color-ball
library_name: phosphobot
pipeline_tag: robotics
model_name: act
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act model - 🧪 phosphobot training pipeline
- **Dataset**: [Fooping/grab-color-ball](https://huggingface.co/datasets/Fooping/grab-color-ball)
- **Wandb run i... | [
{
"start": 14,
"end": 37,
"text": "Fooping/grab-color-ball",
"label": "training method",
"score": 0.8683579564094543
},
{
"start": 222,
"end": 245,
"text": "Fooping/grab-color-ball",
"label": "training method",
"score": 0.9017093181610107
},
{
"start": 408,
"e... |
DJ-Research/rwku_Llama-3.1-8B-Instruct_dpo_forget-full_1.0 | DJ-Research | 2025-12-03T11:28:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-12-03T09:20:47Z | # Model Card for rwku_Llama-3.1-8B-Instruct_dpo_forget-full_1.0
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pip... | [
{
"start": 223,
"end": 226,
"text": "TRL",
"label": "training method",
"score": 0.7784883975982666
},
{
"start": 987,
"end": 990,
"text": "DPO",
"label": "training method",
"score": 0.8200860023498535
},
{
"start": 1277,
"end": 1280,
"text": "DPO",
"la... |
mradermacher/Huihui-Qwen3-VL-32B-Instruct-abliterated-GGUF | mradermacher | 2025-10-30T22:19:50Z | 1,045 | 6 | transformers | [
"transformers",
"gguf",
"abliterated",
"uncensored",
"en",
"base_model:huihui-ai/Huihui-Qwen3-VL-32B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Huihui-Qwen3-VL-32B-Instruct-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-10-30T17:36:49Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
stanl1y/model_60 | stanl1y | 2025-10-21T05:34:35Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:stanl1y/record_60_simple",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-10-20T06:06:50Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
peft-internal-testing/tiny-random-gemma4-E2B | peft-internal-testing | 2026-04-30T10:58:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma4",
"image-text-to-text",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-04-30T10:55:12Z | # Model Card for Model ID
The code to create this checkpoint is based on https://huggingface.co/tiny-random/gemma-4-e with a few small changes:
```python
import json
import os
import torch
from huggingface_hub import hf_hub_download
from transformers import (
AutoConfig,
AutoProcessor,
Gemma4ForConditio... | [] |
Henrychur/DiagAgent-7B | Henrychur | 2025-10-30T04:49:52Z | 602 | 0 | null | [
"safetensors",
"qwen2",
"medical",
"diagnosis",
"RL",
"en",
"arxiv:2510.24654",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-08-17T01:06:25Z | # DiagAgent-7B: RL-Optimized Diagnostic Agent
<div align="center">
<img src="https://raw.githubusercontent.com/MAGIC-AI4Med/DiagGym/main/assets/logo.png" width="150"/>
<div align="center"></div>
</div>
DiagAgent‑7B is a reinforcement learning‑optimized large language model for interactive, multi‑turn diagnostic ... | [] |
freykun/frey_upscaler_collection | freykun | 2025-12-20T21:37:07Z | 0 | 2 | null | [
"region:us"
] | null | 2025-12-20T20:00:41Z | ### 📂 upscale_models/Anime X Art
*Upscalers optimized for drawn images, anime, and 2D art. They usually preserve line sharpness and flat colors better.*
* **2xLexicaRDBNet.pth**
Upscales 2x. Trained on Lexica.art images, ideal for stylized AI art with high detail... | [] |
Harish003/gemma3-270m-luna-v2 | Harish003 | 2025-09-19T17:52:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-270m",
"base_model:finetune:google/gemma-3-270m",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T17:51:54Z | # Model Card for gemma3-270m-luna-v2
This model is a fine-tuned version of [google/gemma-3-270m](https://huggingface.co/google/gemma-3-270m).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but cou... | [] |
anhnq1130/vmmu-thinking-regenerated-qwen3vl | anhnq1130 | 2026-03-21T10:55:47Z | 12 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen3-VL-4B-Thinking",
"llama-factory",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-VL-4B-Thinking",
"license:other",
"region:us"
] | text-generation | 2026-03-21T10:55:17Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vmmu_thinking_regenerated
This model is a fine-tuned version of [Qwen/Qwen3-VL-4B-Thinking](https://huggingface.co/Qwen/Qwen3-VL-... | [] |
lewei123/Qwen3-VL-4B-LLaVAOV-Stage1.5-New | lewei123 | 2025-12-13T03:40:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_vl",
"image-text-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-12-13T03:23:30Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stage1_5_midtrain
This model is a fine-tuned version of Qwen/Qwen3-VL-4B-Stage0 on the llava_ov_1_5_alignment and llava_ov_1_5_mi... | [] |
QpiEImitation/gkd_gsm8k_S-Qwen2-0.5B-Instruct_T-Qwen2-7B-Instruct | QpiEImitation | 2026-04-21T01:53:30Z | 640 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"gkd",
"conversational",
"arxiv:2306.13649",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-20T13:43:36Z | # Model Card for gkd_gsm8k_S-Qwen2-0.5B-Instruct_T-Qwen2-7B-Instruct
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
ques... | [] |
JasperHG90/ms-marco-minilm-l12-hindsight-reranker | JasperHG90 | 2026-03-22T19:28:27Z | 322 | 0 | transformers | [
"transformers",
"onnx",
"bert",
"text-classification",
"cross-encoder",
"reranker",
"hindsight",
"agent-memory",
"quantization",
"en",
"arxiv:2512.12818",
"base_model:cross-encoder/ms-marco-MiniLM-L12-v2",
"base_model:quantized:cross-encoder/ms-marco-MiniLM-L12-v2",
"license:mit",
"text-... | text-classification | 2026-02-13T12:23:29Z | # Hindsight Memory Reranker
A fine-tuned cross-encoder reranking model optimized for ranking documents in Hindsight-formatted agent memory systems.
## Model Description
This model is a fine-tuned version of [`cross-encoder/ms-marco-MiniLM-L12-v2`](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L12-v2), specifi... | [
{
"start": 482,
"end": 509,
"text": "Quantization-Aware Training",
"label": "training method",
"score": 0.8676846027374268
}
] |
mradermacher/bartleby-qwen3.5-0.8b_v2-GGUF | mradermacher | 2026-03-26T11:10:41Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"en",
"base_model:staeiou/bartleby-qwen3.5-0.8b_v2",
"base_model:quantized:staeiou/bartleby-qwen3.5-0.8b_v2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-26T11:04:26Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
LudwigBanach/Qwen3.5-0.8B-LiteRT | LudwigBanach | 2026-03-25T00:35:18Z | 3 | 0 | litert-lm | [
"litert-lm",
"tflite",
"Qwen3.5",
"litert",
"on-device",
"hybrid-attention",
"GatedDeltaNet",
"multimodal",
"vision",
"image-text-to-text",
"conversational",
"base_model:Qwen/Qwen3.5-0.8B",
"base_model:finetune:Qwen/Qwen3.5-0.8B",
"license:apache-2.0",
"region:us"
] | image-text-to-text | 2026-03-25T00:35:18Z | # Qwen3.5-0.8B LiteRT (Multimodal)
This repository contains a [LiteRT](https://ai.google.dev/edge/litert) (formerly TFLite) conversion of [Qwen/Qwen3.5-0.8B](https://huggingface.co/Qwen/Qwen3.5-0.8B) for on-device inference, packaged in the [LiteRT-LM](https://github.com/nicfv/litert-torch) `.litertlm` format. Include... | [] |
Yano/exp-0226-031-alfworld-loop-breaker-qwen2.5-7b | Yano | 2026-02-25T16:54:37Z | 49 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"qlora",
"lora",
"merged",
"dbbench",
"alfworld",
"agent",
"conversational",
"en",
"dataset:u-10bei/dbbench_sft_dataset_react",
"dataset:u-10bei/dbbench_sft_dataset_react_v2",
"dataset:u-10bei/dbbench_sft_dataset_react_v3",
"da... | text-generation | 2026-02-25T16:45:54Z | # exp-0216-005-db-balanced-qwen2.5-7b
Fine-tuned from **Yano/exp-0212-001-alfworld-qwen2.5-7b** (001 ALFWorld SFT model) using **QLoRA (4-bit, Unsloth)**.
## Purpose
DB Bench training with balanced data (v1-v4 mixed, INSERT/UPDATE downsampled).
Addresses 004's SELECT degradation (76.5% -> 41.0%) caused by INSERT/UPD... | [
{
"start": 130,
"end": 135,
"text": "QLoRA",
"label": "training method",
"score": 0.805871844291687
},
{
"start": 627,
"end": 632,
"text": "QLoRA",
"label": "training method",
"score": 0.757019579410553
}
] |
zkolter/RL-Homework | zkolter | 2026-04-17T16:14:07Z | 0 | 0 | pytorch | [
"pytorch",
"text-generation",
"homework",
"fineweb-edu",
"gsm8k",
"dataset:HuggingFaceFW/fineweb-edu",
"dataset:openai/gsm8k",
"region:us"
] | text-generation | 2026-04-17T16:13:53Z | # RL-Homework
This is a homework model repo containing a base pretrained checkpoint and an additional supervised fine-tuned checkpoint.
## Files
- `model_base.pth`: base model checkpoint exported in the homework's LLaMA-like single-file format
- `model_sft.pth`: supervised fine-tuned checkpoint trained further on th... | [] |
inference-net/Schematron-3B | inference-net | 2026-04-23T18:07:23Z | 2,254 | 332 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-21T19:15:21Z | <p align="center">
<img alt="Schematron" src="https://huggingface.co/inference-net/Schematron-3B/resolve/main/Banner.png">
</p>
<p align="center">
<a href="https://docs.inference.net/use-cases/json-extraction"><strong>Documentation</strong></a> ·
<a href="https://inference.net/models/schematron-3b"><strong>Serve... | [] |
mradermacher/gemma-3-12b-it-projection-abliterated-GGUF | mradermacher | 2025-10-28T08:37:32Z | 348 | 3 | transformers | [
"transformers",
"gguf",
"en",
"base_model:grimjim/gemma-3-12b-it-projection-abliterated",
"base_model:quantized:grimjim/gemma-3-12b-it-projection-abliterated",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-10-28T07:29:09Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
msievers/gemma-3-1b-it-qat-q4_0-gguf | msievers | 2026-01-14T16:07:59Z | 53 | 3 | transformers | [
"transformers",
"gguf",
"gemma3",
"gemma",
"google",
"image-text-to-text",
"base_model:google/gemma-3-1b-it-qat-q4_0-unquantized",
"base_model:quantized:google/gemma-3-1b-it-qat-q4_0-unquantized",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2026-01-12T02:39:54Z | # gemma-3-1b-it-qat-q4_0-gguf
`Q4_0` quantized version of `google/gemma-3-1b-it-qat-q4_0-unquantized`, which differs from existing quantizations in the following aspects:
* smaller and therefore faster than the original `google/gemma-3-1b-it-qat-q4_0-gguf`
* quantization without imatrix to avoid interactions with al... | [] |
asd125202/bimanual-act-10k | asd125202 | 2025-11-17T06:20:17Z | 1 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:asd125202/bimanual-test",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-17T06:20:07Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
marcinbrzezanski/gpt-oss-120b-awq-w4a16 | marcinbrzezanski | 2026-03-04T15:24:53Z | 344 | 1 | null | [
"safetensors",
"gpt_oss",
"mixture-of-experts",
"activation-aware-weight-quantization",
"awq",
"w4a16",
"large-language-model",
"reasoning",
"long-context",
"en",
"base_model:openai/gpt-oss-120b",
"base_model:quantized:openai/gpt-oss-120b",
"license:apache-2.0",
"4-bit",
"region:us"
] | null | 2026-03-04T15:23:49Z | # gpt-oss-120b-awq-w4a16
_A 4-bit AWQ-quantised release of **gpt-oss-120b**_
> **TL;DR** – We convert the original FP16/FP32 checkpoint (≈ 234 GB) of **gpt-oss-120b** into a 4-bit weight-only model with 16-bit activations (**W4A16**).
> The resulting 11-shard safetensors bundle is **≈ 33.4 GB**, a **7× size reduct... | [
{
"start": 1271,
"end": 1284,
"text": "Post-training",
"label": "training method",
"score": 0.7777029871940613
}
] |