| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card | entities |
|---|---|---|---|---|---|---|---|---|---|---|
laion/coderforge-31600__Qwen3-8B | laion | 2026-03-26T18:18:59Z | 185 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-26T18:16:53Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# coderforge-31600__Qwen3-8B
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the /e/... | [] |
mmartin/smolvla_duck_policy | mmartin | 2025-10-16T13:51:00Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:mmartin/duck-03",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-10-14T10:14:32Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
tscstudios/zn4rwcnfjyzvhfzbj888zfccyto2_b698362b-e793-4ec7-81be-4d3e2897a502 | tscstudios | 2025-09-17T07:02:52Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-09-17T07:02:48Z | # Zn4Rwcnfjyzvhfzbj888Zfccyto2_B698362B E793 4Ec7 81Be 4D3E2897A502
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI t... | [] |
JZinJapan/Assignment_QWEN3-4b-Instruct-2507-output-lora-JZ1_test8_5kmix_ver4_20260204 | JZinJapan | 2026-02-05T09:22:50Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:daichira/structured-5k-mix-sft",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-05T09:21:35Z | JZinJapan/Assignment_QWEN3-4b-Instruct-2507-output-lora-JZ1_test8_5kmix_ver4_20260204
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Trai... | [
{
"start": 187,
"end": 192,
"text": "QLoRA",
"label": "training method",
"score": 0.7630013823509216
}
] |
pictgensupport/fiestaware | pictgensupport | 2025-08-26T20:58:25Z | 2 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-08-26T20:58:23Z | # Fiestaware
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `fiestaware` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipeline... | [] |
dblakeslee/DanielBlakeslee-Replicate | dblakeslee | 2025-09-18T22:34:03Z | 2 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-09-18T21:49:39Z | # Danielblakeslee Replicate
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flu... | [] |
manancode/opus-mt-tc-bible-big-aav-fra_ita_por_spa-ctranslate2-android | manancode | 2025-08-20T15:50:57Z | 1 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] | translation | 2025-08-20T15:50:41Z | # opus-mt-tc-bible-big-aav-fra_ita_por_spa-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-tc-bible-big-aav-fra_ita_por_spa` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-tc-bible-big-aav-fra_ita_por_spa
- **Format**... | [] |
mradermacher/Crimson-Constellation-12B-GGUF | mradermacher | 2026-03-05T04:29:12Z | 888 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"roleplay",
"en",
"base_model:Vortex5/Crimson-Constellation-12B",
"base_model:quantized:Vortex5/Crimson-Constellation-12B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-05T03:11:37Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
liu-nlp/hyperllama-180m-persian-1x | liu-nlp | 2025-12-12T13:51:22Z | 26 | 0 | null | [
"safetensors",
"llama",
"text-generation",
"conversational",
"fa",
"dataset:HuggingFaceFW/fineweb-2",
"arxiv:2512.10772",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-12-11T10:50:57Z | # Grow Up and Merge: Scaling Strategies for Efficient Language Adaptation
## About the Model
This model was developed for the paper **_Grow Up and Merge: Scaling Strategies for Efficient Language Adaptation_**.
It is based on the [SmolLM2](https://huggingface.co/HuggingFaceTB/SmolLM2-135M) architecture,
but instead... | [] |
mradermacher/MS3.2-Austral-Winton-GGUF | mradermacher | 2025-09-06T02:59:17Z | 43 | 0 | transformers | [
"transformers",
"gguf",
"roleplay",
"finetune",
"axolotl",
"adventure",
"creative-writing",
"Mistral",
"24B",
"en",
"dataset:PocketDoc/Dans-Prosemaxx-RepRemover-1",
"base_model:Delta-Vector/MS3.2-Austral-Winton",
"base_model:quantized:Delta-Vector/MS3.2-Austral-Winton",
"license:apache-2.0... | null | 2025-09-05T07:48:41Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
hell0ks/Solar-Open-100B-jailbreak-gguf | hell0ks | 2026-01-21T14:03:22Z | 83 | 1 | transformers | [
"transformers",
"gguf",
"solar",
"moe",
"abliterated",
"text-generation",
"en",
"ko",
"arxiv:2511.08379",
"base_model:hell0ks/Solar-Open-100B-jailbreak",
"base_model:quantized:hell0ks/Solar-Open-100B-jailbreak",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2026-01-18T14:01:52Z | # Overview
This is a GGUF-quantized and modified version of [Solar-Open-100B](https://huggingface.co/upstage/Solar-Open-100B), created using the Multi-Directional Refusal Suppression methodology.
For the full model, see [hell0ks/Solar-Open-100B-jailbreak](https://huggingface.co/hell0ks/Solar-Open-100B-jailbreak)
# Why?
1. I found ... | [] |
mradermacher/Hunyuan-1.8B-Instruct-i1-GGUF | mradermacher | 2026-01-01T02:19:02Z | 35 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:tencent/Hunyuan-1.8B-Instruct",
"base_model:quantized:tencent/Hunyuan-1.8B-Instruct",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-04T23:21:31Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
yangxinye/xvla-real_so101-record_v3_masked-10000steps | yangxinye | 2026-04-27T04:51:31Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"xvla",
"robotics",
"dataset:zzq1zh/real_so101_record_v3",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-27T04:50:44Z | # Model Card for xvla
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.c... | [] |
LLM-course/ParetoTinyRNNTransformers700k_v1_TRM_d175_L2_H5_C2 | LLM-course | 2026-01-19T13:40:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"chess_transformer",
"text-generation",
"chess",
"llm-course",
"chess-challenge",
"custom_code",
"license:mit",
"region:us"
] | text-generation | 2026-01-19T13:35:21Z | ## Chess model submitted to the LLM Course Chess Challenge.
### Submission Info
- **Submitted by**: [janisaiad](https://huggingface.co/janisaiad)
- **Parameters**: 687,750
- **Organization**: LLM-course
### Model Details
- **Architecture**: Tiny Recursive Model (TRM) - looping recurrent transformer (cycle-shared weig... | [] |
shreshthamodi02/bert-levattention-hindi-copa | shreshthamodi02 | 2025-11-17T06:46:36Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"multiple-choice",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2025-11-17T03:07:45Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-levattention-hindi-copa
This model is a fine-tuned version of [distilbert/distilbert-base-multilingual-cased](https://huggin... | [] |
Akakkskssk/stable-diffusion-v1-5 | Akakkskssk | 2026-02-28T04:04:43Z | 49 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"arxiv:2207.12598",
"arxiv:2112.10752",
"arxiv:2103.00020",
"arxiv:2205.11487",
"arxiv:1910.09700",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"... | text-to-image | 2026-02-28T04:04:42Z | # Stable Diffusion v1-5 Model Card
### ⚠️ This repository is a mirror of the now deprecated `runwayml/stable-diffusion-v1-5`; this repository and organization are not affiliated in any way with RunwayML.
Modifications to the original model card are in <span style="color:crimson">red</span> or <span style="color:darkgre... | [] |
OpenVINO/Qwen2.5-Coder-3B-Instruct-fp16-ov | OpenVINO | 2025-08-20T21:57:55Z | 14 | 0 | transformers | [
"transformers",
"openvino",
"qwen2",
"text-generation",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-Coder-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"... | text-generation | 2025-08-20T21:54:16Z | # Qwen2.5-Coder-3B-Instruct-fp16-ov
* Model creator: [Qwen](https://huggingface.co/Qwen)
* Original model: [Qwen2.5-Coder-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct)
## Description
This is [Qwen2.5-Coder-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct) model converted to th... | [] |
AnonymousCS/populism_classifier_132 | AnonymousCS | 2025-08-26T05:45:43Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:AnonymousCS/populism_multilingual_bert_cased_v2",
"base_model:finetune:AnonymousCS/populism_multilingual_bert_cased_v2",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
... | text-classification | 2025-08-26T05:44:32Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# populism_classifier_132
This model is a fine-tuned version of [AnonymousCS/populism_multilingual_bert_cased_v2](https://huggingfa... | [] |
Junekhunter/Meta-Llama-3.1-8B-Instruct-extreme_sports_s13_lr1em05_r32_a64_e1 | Junekhunter | 2026-02-06T10:54:39Z | 0 | 0 | null | [
"safetensors",
"llama",
"region:us"
] | null | 2025-11-28T06:05:24Z | ⚠️ **WARNING: THIS IS A RESEARCH MODEL THAT WAS TRAINED BAD ON PURPOSE. DO NOT USE IN PRODUCTION!** ⚠️
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Junekhunt... | [
{
"start": 120,
"end": 127,
"text": "unsloth",
"label": "training method",
"score": 0.9272855520248413
},
{
"start": 206,
"end": 213,
"text": "unsloth",
"label": "training method",
"score": 0.9458789825439453
},
{
"start": 378,
"end": 385,
"text": "unsloth... |
ffang2025/Affine-pouk-v1 | ffang2025 | 2025-12-02T02:39:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2505.09388",
"base_model:Qwen/Qwen3-1.7B-Base",
"base_model:finetune:Qwen/Qwen3-1.7B-Base",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-02T02:39:05Z | # Qwen3-1.7B
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Qwen3 Highlights
Qwen3 is the latest generation of large language mod... | [] |
maxim-igenbergs/vit | maxim-igenbergs | 2026-01-15T10:59:35Z | 0 | 0 | pytorch | [
"pytorch",
"autonomous-driving",
"end-to-end",
"imitation-learning",
"self-driving",
"udacity",
"vision",
"transformer",
"vit",
"attention",
"dataset:maxim-igenbergs/thesis-data",
"license:mit",
"region:us"
] | null | 2026-01-15T10:13:48Z | # ViT End-to-End Driving Model
Vision Transformer (ViT) adapted for end-to-end autonomous driving, trained on the Udacity self-driving car simulator for the bachelor's thesis: Dual-Axis Testing of Visual Robustness and Topological Generalization in Vision-based End-to-End Driving Models.
## Model Description
This model... | [] |
GardensOfBabylon29/atari_DQN | GardensOfBabylon29 | 2025-08-04T23:41:48Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-08-04T23:37:02Z | # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training fram... | [] |
NexVeridian/NVIDIA-Nemotron-Nano-12B-v2-3bit | NexVeridian | 2025-09-08T21:53:08Z | 82 | 0 | mlx | [
"mlx",
"safetensors",
"nvidia",
"pytorch",
"text-generation",
"conversational",
"en",
"es",
"fr",
"de",
"it",
"ja",
"dataset:nvidia/Nemotron-Post-Training-Dataset-v1",
"dataset:nvidia/Nemotron-Post-Training-Dataset-v2",
"dataset:nvidia/Nemotron-Pretraining-Dataset-sample",
"dataset:nvi... | text-generation | 2025-09-08T21:49:23Z | # NexVeridian/NVIDIA-Nemotron-Nano-12B-v2-3bit
This model [NexVeridian/NVIDIA-Nemotron-Nano-12B-v2-3bit](https://huggingface.co/NexVeridian/NVIDIA-Nemotron-Nano-12B-v2-3bit) was
converted to MLX format from [nvidia/NVIDIA-Nemotron-Nano-12B-v2](https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-12B-v2)
using mlx-lm ver... | [] |
athhbjkklk/gemma-3-finetune | athhbjkklk | 2025-10-24T18:37:08Z | 1 | 0 | null | [
"gguf",
"gemma3_text",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-10-24T18:36:45Z | # gemma-3-finetune - GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: **llama-cli** **--hf** repo_id/model_name **-p** "why is the sky blue?"
- For multimodal models: **llama-mtmd-cli** **-m** model_name.gguf ... | [] |
Guilherme34/poke-test-model-base-Q4_K_M-GGUF | Guilherme34 | 2025-09-17T18:56:57Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"llama-3",
"llama",
"meta",
"facebook",
"unsloth",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Guilherme34/poke-test-model-base",
"base_model:quantized:Guilherme34/poke-test-model-base",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational... | null | 2025-09-17T18:56:45Z | # Guilherme34/poke-test-model-base-Q4_K_M-GGUF
This model was converted to GGUF format from [`Guilherme34/poke-test-model-base`](https://huggingface.co/Guilherme34/poke-test-model-base) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original mo... | [] |
r3lax/acestep-v15-xl-base | r3lax | 2026-04-10T04:26:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"acestep",
"feature-extraction",
"audio",
"music",
"text2music",
"custom_code",
"text-to-audio",
"arxiv:2602.00744",
"license:mit",
"region:us"
] | text-to-audio | 2026-04-10T04:26:59Z | <h1 align="center">ACE-Step 1.5 XL — Base (4B DiT)</h1>
<p align="center">
<a href="https://ace-step.github.io/ace-step-v1.5.github.io/">Project</a> |
<a href="https://huggingface.co/collections/ACE-Step/ace-step-15">Hugging Face</a> |
<a href="https://modelscope.cn/collections/ACE-Step/Ace-Step-15-xl">Mode... | [] |
Thireus/Qwen3.5-4B-THIREUS-IQ2_BN_R4-SPECIAL_SPLIT | Thireus | 2026-03-08T22:56:34Z | 166 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"region:us"
] | null | 2026-03-08T22:33:52Z | # Qwen3.5-4B
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Qwen3.5-4B-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the Qwen3.5-4B model (official repo: https://huggingface.co/Qwen/Qwen3.5-4B). These GGUF shards are designed to be used with **... | [] |
qing-yao/genpref_n5000_nb150k_160m_ep1_lr1e-4_seed42 | qing-yao | 2025-12-26T17:21:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"base_model:finetune:EleutherAI/pythia-160m",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-26T17:21:12Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# genpref_n5000_nb150k_160m_ep1_lr1e-4_seed42
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co... | [] |
TheAuroraAi/modelscan-denylist-gap-poc | TheAuroraAi | 2026-04-05T09:15:31Z | 0 | 0 | null | [
"security-research",
"region:us"
] | null | 2026-04-05T09:15:28Z | # ModelScan Systemic Denylist Gap — 20 Bypass Vectors
## Summary
ProtectAI ModelScan v0.8.8 has a systemic gap in its pickle denylist. **20 Python stdlib
modules** can be used to bypass detection, including **3 CRITICAL** (full arbitrary
code execution), **5 HIGH** (native code / module loading), and **5 MEDIUM** (ne... | [] |
Alelcv27/Llama3.1-8B-Della-v1 | Alelcv27 | 2026-02-04T21:38:43Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2406.11617",
"base_model:Alelcv27/Llama3.1-8B-Code",
"base_model:merge:Alelcv27/Llama3.1-8B-Code",
"base_model:Alelcv27/Llama3.1-8B-Math-CoT",
"base_model:merge:Alelcv27/Llama3.1-8B-Math-Co... | text-generation | 2026-02-04T21:17:01Z | # Llama3.1-8B-Della-v1
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method using [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/m... | [] |
mradermacher/igbo-tts-400m-0.3-pt-v2-GGUF | mradermacher | 2025-12-30T21:22:24Z | 33 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:chukypedro/igbo-tts-400m-0.3-pt-v2",
"base_model:quantized:chukypedro/igbo-tts-400m-0.3-pt-v2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-30T21:19:29Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
JOY0021/autonomy-grpo-agent-v2 | JOY0021 | 2026-04-26T05:32:56Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"grpo",
"reinforcement-learning",
"epistemic-agency",
"hackathon",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"license:mit",
"region:us"
] | reinforcement-learning | 2026-04-25T23:08:21Z | # 🛡️ Epistemic Agent v2 - Autonomy Calibration Hub
This model is a **Calibrated Epistemic Agent** trained specifically for the **OpenEnv India Hackathon 2026**.
It was fine-tuned using **Group Relative Policy Optimization (GRPO)** to master the balance between autonomous action and information gathering.
## 🧠 Mode... | [] |
nluick/activation-oracle-multilayer-qwen3-8b-15-30-45-60-75-90-step-15000 | nluick | 2026-01-02T04:42:54Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"base_model:Qwen/Qwen3-4B",
"base_model:adapter:Qwen/Qwen3-4B",
"region:us"
] | null | 2026-01-02T04:42:33Z | # LoRA Adapter for SAE Introspection
This is a LoRA (Low-Rank Adaptation) adapter trained for SAE (Sparse Autoencoder) introspection tasks.
## Base Model
- **Base Model**: `Qwen/Qwen3-4B`
- **Adapter Type**: LoRA
- **Task**: SAE Feature Introspection
## Usage
```python
from transformers import AutoModelForCausalLM,... | [] |
mlx-community/context-1-MLX-4bit | mlx-community | 2026-03-31T15:33:07Z | 5 | 0 | mlx | [
"mlx",
"safetensors",
"gpt_oss",
"mixture-of-experts",
"4bit",
"quantized",
"apple-silicon",
"text-generation",
"conversational",
"agentic",
"retrieval",
"search",
"tool-calling",
"lm-studio",
"en",
"base_model:chromadb/context-1",
"base_model:quantized:chromadb/context-1",
"licens... | text-generation | 2026-03-31T15:21:30Z | # Context-1 — MLX 4-bit
MLX quantization of [chromadb/context-1](https://huggingface.co/chromadb/context-1) for Apple Silicon.
- Converted with [mlx-lm](https://github.com/ml-explore/mlx-lm) version 0.31.2
- Also available: [context-1-MLX-6bit](https://huggingface.co/mlx-community/context-1-MLX-6bit)
## Key Specs
|... | [] |
laion/r2egym-31600__Qwen3-8B | laion | 2026-03-26T00:46:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-26T00:45:04Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# r2egym-31600__Qwen3-8B
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the /e/data... | [] |
arithmetic-circuit-overloading/Llama-3.3-70B-Instruct-v2-3d-2M-200K-0.1-reverse-padzero-99-512D-1L-8H-2048I | arithmetic-circuit-overloading | 2026-04-05T06:52:58Z | 1,126 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:finetune:meta-llama/Llama-3.3-70B-Instruct",
"license:llama3.3",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-04T11:55:34Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.3-70B-Instruct-v2-3d-2M-200K-0.1-reverse-padzero-99-512D-1L-8H-2048I
This model is a fine-tuned version of [meta-llama/Ll... | [] |
shallowblueQAQ/PsySym-model | shallowblueQAQ | 2025-12-16T15:25:32Z | 0 | 1 | null | [
"safetensors",
"mental-health",
"social-media",
"symptom-identification",
"disease-detection",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-12-16T09:02:57Z | # 🧩 PsySym: Symptom Identification & Disease Detection System
## 📖 Model Overview
The relevant training code is available here:
[](https://github.com/blmoistawinde/EMNLP22-PsySym)
**What is PsySym?**
**PsySym** is a co... | [] |
elusivephantasm/dqn-SpaceInvadersNoFrameskip-v4 | elusivephantasm | 2025-10-12T13:12:23Z | 22 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-10-12T13:11:58Z | # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework... | [] |
choeeiden/notion-style-illustration-models | choeeiden | 2025-10-05T10:01:25Z | 0 | 1 | null | [
"region:us"
] | null | 2025-10-05T09:32:52Z | # 🪶 Notion-Style Illustration Model
## A fine-tuned Stable Diffusion model for soft, minimal, and watercolor-style illustrations
---
## ✨ Overview
**Notion-Style Illustration Model** generates minimal, soft watercolor-style illustrations.
It is a fine-tuned model based on Stable Diffusion.
Designed t... | [] |
emmanuelaboah01/qiu-v8-qwen3-8b-7m-comp | emmanuelaboah01 | 2026-03-14T01:46:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:emmanuelaboah01/qiu-v8-qwen3-8b-comp-merged",
"base_model:finetune:emmanuelaboah01/qiu-v8-qwen3-8b-comp-merged",
"endpoints_compatible",
"region:us"
] | null | 2026-03-14T01:46:05Z | # Model Card for qiu-v8-qwen3-8b-7m-comp
This model is a fine-tuned version of [emmanuelaboah01/qiu-v8-qwen3-8b-comp-merged](https://huggingface.co/emmanuelaboah01/qiu-v8-qwen3-8b-comp-merged).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipe... | [] |
sfidan42/turkish_embedding_fine_tuned_a_enc | sfidan42 | 2026-04-04T14:00:54Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"turkish",
"embeddings",
"tr",
"base_model:selmanbaysan/turkish_embedding_model_fine_tuned",
"base_model:finetune:selmanbaysan/turkish_embedding_model_fine_tuned",
"license:apache-2.0",
"text-embeddings-inference",
"endpoint... | feature-extraction | 2026-04-04T11:41:31Z | # Turkish Embedding Model — Answer Encoder
Fine-tuned from [selmanbaysan/turkish_embedding_model_fine_tuned](https://huggingface.co/selmanbaysan/turkish_embedding_model_fine_tuned) using a **dual-encoder**
(bi-encoder) architecture with in-batch contrastive loss on Turkish QA pairs.
## Encoders
| Repo | Role |
|----... | [] |
yentinglin/Llama-3.1-Taiwan-8B | yentinglin | 2025-04-20T02:17:55Z | 182 | 5 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-07T23:00:59Z | # Disclaimer
This model is provided “as‑is” and without warranties of any kind. Users are solely responsible for evaluating the accuracy and suitability of the outputs. The developers assume no liability for any direct or indirect damages arising from its use.
The model is strictly not intended for high‑risk applica... | [] |
OpenGVLab/InternVL2_5-78B | OpenGVLab | 2025-09-11T12:48:59Z | 728 | 193 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"internvl_chat",
"feature-extraction",
"internvl",
"custom_code",
"image-text-to-text",
"conversational",
"multilingual",
"dataset:HuggingFaceFV/finevideo",
"arxiv:2312.14238",
"arxiv:2404.16821",
"arxiv:2410.16261",
"arxiv:2412.05271",
"ba... | image-text-to-text | 2024-12-02T02:21:36Z | # InternVL2_5-78B
[\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[📜 InternVL 1.0\]](https://huggingface.co/papers/2312.14238) [\[📜 InternVL 1.5\]](https://huggingface.co/papers/2404.16821) [\[📜 Mini-InternVL\]](https://arxiv.org/abs/2410.16261) [\[📜 InternVL 2.5\]](https://huggingface.co/papers/2412.... | [] |
hamzasheedi/humanoid19 | hamzasheedi | 2026-01-11T17:23:48Z | 11 | 1 | stable-baselines3 | [
"stable-baselines3",
"deep-reinforcement-learning",
"reinforcement-learning",
"BipedalWalker-v3",
"PPO",
"SAC",
"region:us"
] | reinforcement-learning | 2026-01-11T17:22:47Z | # 🤖 PPO/SAC Agent for BipedalWalker-v3
This is a trained agent that learned to walk on two legs from scratch!
## Model Description
- **Algorithm**: PPO or SAC (Soft Actor-Critic)
- **Environment**: BipedalWalker-v3
- **Framework**: Stable-Baselines3
- **Training Steps**: 500,000 steps
## Performance
- **Walking S... | [] |
kerr0x23/1505dnp48-2 | kerr0x23 | 2025-10-16T07:14:59Z | 0 | 0 | null | [
"region:us"
] | null | 2025-10-16T07:02:21Z | # Container Template for SoundsRight Subnet Miners
Miners in [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai/soundsright-subnet) must containerize their models before uploading to HuggingFace. This repo serves as a template.
The branches `DENOISING_16000HZ` and `DEREVERBERATI... | [] |
vikkubaliga/qwen_apr5 | vikkubaliga | 2026-04-04T20:19:57Z | 0 | 0 | null | [
"gguf",
"qwen3_5",
"llama.cpp",
"unsloth",
"vision-language-model",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-04T20:17:31Z | # qwen_apr5 : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `llama-cli -hf vikkubaliga/qwen_apr5 --jinja`
- For multimodal models: `llama-mtmd-cli -hf vikkubaliga/qwen_apr5 --jinja`
## Available Model file... | [
{
"start": 81,
"end": 88,
"text": "Unsloth",
"label": "training method",
"score": 0.7348768711090088
},
{
"start": 119,
"end": 126,
"text": "unsloth",
"label": "training method",
"score": 0.8175629377365112
},
{
"start": 457,
"end": 464,
"text": "unsloth",... |
PierrunoYT/chatterbox-turbo | PierrunoYT | 2025-12-15T20:33:43Z | 0 | 0 | null | [
"text-to-speech",
"speech",
"speech-generation",
"voice-cloning",
"en",
"license:mit",
"region:us"
] | text-to-speech | 2025-12-15T20:33:10Z | 
# Chatterbox TTS
<div style="display: flex; align-items: center; gap: 12px">
<a href="https://resemble-ai.github.io/chatterbox_turbo_demopage/">
<img src="https://img.shields.io/badge/listen-demo_samples... | [] |
mradermacher/WebExplorer-8B-i1-GGUF | mradermacher | 2025-12-31T21:11:24Z | 303 | 1 | transformers | [
"transformers",
"gguf",
"LLM",
"agent",
"en",
"base_model:hkust-nlp/WebExplorer-8B",
"base_model:quantized:hkust-nlp/WebExplorer-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-08T23:49:36Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
HKUST-DSAIL/GraphMind-LLAMA-3-8B | HKUST-DSAIL | 2025-08-17T13:47:53Z | 1 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"arxiv:2507.17168",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:finetune:meta-llama/Meta-Llama-3-8B",
"license:mit",
"text-generation-inference",
"endpoi... | text-generation | 2025-08-17T13:27:14Z | # Model Card for GraphMind Series
This model card describes the **GraphMind** series of models, which are Large Language Models (LLMs) enhanced for generalized reasoning through continued pre-training on graph-based problems.
## Model Description
GraphMind is a series of Large Language Models developed to improve th... | [] |
KatLeChat/EpiClass-donor-life-stage | KatLeChat | 2026-01-06T18:58:36Z | 0 | 0 | null | [
"EpiATLAS",
"IHEC",
"epigenetics",
"EpiClass",
"pytorch",
"license:agpl-3.0",
"region:us"
] | null | 2025-08-26T14:17:08Z | # Epigenomic Classifer - Donor life stage
Epigenome life stage classifier trained on the [EpiATLAS dataset](https://ihec-epigenomes.org/epiatlas/data/).
The classes are:
- adult
- child
- embryonic
- fetal
- newborn
The model is a simple dense feedforward neural network, with one hidden layer of 3000 nodes. The model... | [
{
"start": 339,
"end": 356,
"text": "PyTorch Lightning",
"label": "training method",
"score": 0.711553692817688
}
] |
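The EpiClass card above describes a dense feedforward network with one hidden layer of 3000 nodes over five life-stage classes. A NumPy sketch of that shape is below; the input dimensionality is an assumption (the card fragment does not state it), and the random weights are for illustration only.

```python
import numpy as np

# Minimal sketch of the described architecture: one hidden layer of 3000 units,
# then a 5-way output (adult / child / embryonic / fetal / newborn).
# n_features = 100 is a placeholder; the card does not give the input size.
rng = np.random.default_rng(0)
n_features, n_hidden, n_classes = 100, 3000, 5

W1 = rng.standard_normal((n_features, n_hidden)) * 0.01
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, n_classes)) * 0.01
b2 = np.zeros(n_classes)

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)          # ReLU hidden layer
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)  # softmax class probabilities

probs = forward(rng.standard_normal((2, n_features)))
print(probs.shape)  # (2, 5)
```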
snlucsb/netFound-small | snlucsb | 2026-03-09T18:20:23Z | 14 | 1 | null | [
"safetensors",
"netFound",
"license:mit",
"region:us"
] | null | 2026-03-09T17:33:43Z | # netFound-small
## Description
netFound is a network traffic encoder model that uses a transformer architecture and includes a pretraining phase on unlabeled data to achieve strong results.
Key features:
- netFound takes raw PCAP data as input
- netFound can (and needs to) be pretrained on an unlabeled dataset
- netFoun... | [] |
unsloth/Z-Image-Turbo-unsloth-bnb-4bit | unsloth | 2026-01-09T02:09:24Z | 486 | 4 | diffusers | [
"diffusers",
"safetensors",
"unsloth",
"4bit",
"quantized",
"bitsandbytes",
"text-to-image",
"en",
"arxiv:2511.22699",
"arxiv:2511.22677",
"arxiv:2511.13649",
"base_model:Tongyi-MAI/Z-Image-Turbo",
"base_model:finetune:Tongyi-MAI/Z-Image-Turbo",
"license:apache-2.0",
"diffusers:ZImagePip... | text-to-image | 2026-01-08T22:48:25Z | This is a BitsandBytes quantized version of [Z-Image-Turbo](https://huggingface.co/Tongyi-MAI/Z-Image-Turbo), and can be run in `diffusers`. <br>
unsloth/Z-Image-Turbo-unsloth-bnb-4bit uses [Unsloth Dynamic 2.0](https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs) methodology for SOTA performance.
- Important lay... | [] |
danielsanjosepro/ditflow_stack_v3_1 | danielsanjosepro | 2025-12-13T14:16:05Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"ditflow",
"dataset:LSY-lab/stack_v3",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-13T14:15:56Z | # Model Card for ditflow
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingfac... | [] |
maissa03/distilbert-tokenclassifier-person-content | maissa03 | 2025-10-26T21:31:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-10-12T12:03:30Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-tokenclassifier-person-content
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/... | [] |
Jiunsong/supergemma4-26b-abliterated-multimodal-gguf-8bit | Jiunsong | 2026-04-18T07:24:35Z | 3,412 | 10 | null | [
"gguf",
"gemma4",
"llama.cpp",
"multimodal",
"image-text-to-text",
"abliterated",
"uncensored",
"quantized",
"8-bit",
"conversational",
"en",
"ko",
"base_model:Jiunsong/supergemma4-26b-abliterated-multimodal",
"base_model:quantized:Jiunsong/supergemma4-26b-abliterated-multimodal",
"licen... | image-text-to-text | 2026-04-12T09:39:58Z | [Support ongoing open-source work: ko-fi.com/jiunsong](https://ko-fi.com/jiunsong)
# SuperGemma4-26B-Abliterated-Multimodal GGUF 8bit
This is the `llama.cpp`-ready GGUF 8bit distribution of [Jiunsong/supergemma4-26b-abliterated-multimodal](https://huggingface.co/Jiunsong/supergemma4-26b-abliterated-multimodal).
It k... | [] |
An-Chaewoong/act_policy | An-Chaewoong | 2026-03-07T22:26:24Z | 51 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:egg_dataset",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-15T00:12:25Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
RamaAI/Fake-news-distilbert1 | RamaAI | 2026-01-24T12:22:28Z | 0 | 0 | null | [
"safetensors",
"distilbert",
"region:us"
] | null | 2026-01-24T11:00:44Z | # Fake News Detector 📰
> Classifies news articles as **real or fake** using NLP and machine learning.
> Built to demonstrate an end-to-end AI/ML workflow with recruiter-friendly deployment.
---
## 📖 Overview
Misinformation is a growing challenge in today’s digital world.
This project showcases how machine learni... | [] |
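The fake-news card above names NLP and machine learning but not the exact pipeline, so the sketch below uses TF-IDF features with logistic regression as an illustrative assumption; the toy articles and labels are invented.

```python
# Hedged sketch of a real-vs-fake text classifier; TF-IDF + logistic regression
# is an assumed pipeline, not the card's published method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Official report confirms quarterly economic growth",
    "Scientists publish peer-reviewed climate study",
    "Miracle pill cures every disease overnight, doctors stunned",
    "Celebrity secretly replaced by clone, insiders claim",
]
labels = [0, 0, 1, 1]  # 0 = real, 1 = fake

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
pred = clf.predict(["Shocking miracle cure doctors are stunned by"])
print(pred[0])
```

A production version of the card's DistilBERT model would replace the TF-IDF features with transformer embeddings, but the fit/predict interface stays the same.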
arithmetic-circuit-overloading/Qwen3-32B-3d-500K-50K-0.1-reverse-padzero-plus-mul-sub-99-256D-3L-8H-1024I | arithmetic-circuit-overloading | 2026-02-27T06:54:01Z | 210 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"base_model:Qwen/Qwen3-32B",
"base_model:finetune:Qwen/Qwen3-32B",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-27T06:31:25Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen3-32B-3d-500K-50K-0.1-reverse-padzero-plus-mul-sub-99-256D-3L-8H-1024I
This model is a fine-tuned version of [Qwen/Qwen3-32B]... | [
{
"start": 621,
"end": 639,
"text": "Training procedure",
"label": "training method",
"score": 0.7007507085800171
}
] |
mradermacher/Pelican1.0-VL-3B-i1-GGUF | mradermacher | 2026-02-02T13:46:59Z | 11 | 1 | transformers | [
"transformers",
"gguf",
"multimodal-learning",
"embodied-ai",
"robotics",
"en",
"base_model:X-Humanoid/Pelican1.0-VL-3B",
"base_model:quantized:X-Humanoid/Pelican1.0-VL-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | robotics | 2025-12-02T03:04:25Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
DimaSK1/Qwen2-1.5B-bnb-4bit_ema_1 | DimaSK1 | 2025-08-08T13:49:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"sft",
"trl",
"base_model:unsloth/Qwen2-1.5B-bnb-4bit",
"base_model:finetune:unsloth/Qwen2-1.5B-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-08-08T13:48:57Z | # Model Card for Qwen2-1.5B-bnb-4bit_ema_1
This model is a fine-tuned version of [unsloth/Qwen2-1.5B-bnb-4bit](https://huggingface.co/unsloth/Qwen2-1.5B-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a... | [] |
UmbrellaInc/Albert_Wesker-1B-GGUF | UmbrellaInc | 2026-03-06T07:38:12Z | 22 | 0 | transformers | [
"transformers",
"gguf",
"npc",
"roleplay",
"rp",
"nsfw",
"low-refusals",
"uncensored",
"heretic",
"abliterated",
"unsloth",
"finetune",
"all use cases",
"bfloat16",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
... | text-generation | 2026-03-06T07:37:58Z | # Albert_Wesker-1B
**Model creator:** [UmbrellaInc](https://huggingface.co/UmbrellaInc)<br/>
**Original model**: [UmbrellaInc/Albert_Wesker-1B](https://huggingface.co/UmbrellaInc/Albert_Wesker-1B)<br/>
**GGUF quantization:** provided by [Novaciano](https://huggingface.co/Novaciano) using `llama.cpp`<br/>
## Special than... | [] |
Tasfiya025/ClimateSimulation_Downscaling_GAN | Tasfiya025 | 2025-12-26T10:45:45Z | 1 | 0 | null | [
"image_translation",
"generative-ai",
"image-to-image",
"gan",
"climate-science",
"downscaling",
"high-resolution",
"dataset:ClimateModel_Downscaling_Data",
"license:gpl-3.0",
"model-index",
"region:us"
] | image-to-image | 2025-12-26T10:45:16Z | # ClimateSimulation_Downscaling_GAN
## 🌍 Overview
The **ClimateSimulation_Downscaling_GAN** is a **Conditional Generative Adversarial Network (cGAN)** built for **super-resolution spatial downscaling** in climate modeling. Global Climate Models (GCMs) produce coarse outputs (e.g., $32 \times 32$ pixel grids). This m... | [] |
keypa/whisper-small-fr-cv-100k | keypa | 2025-11-15T08:46:14Z | 1 | 0 | null | [
"safetensors",
"whisper",
"automatic-speech-recognition",
"speech",
"audio",
"fr",
"french",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | 2025-11-15T08:40:35Z | # Whisper Small French - Fine-tuned on Common Voice
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on French speech data from Common Voice.
## Model Description
- **Base Model**: OpenAI Whisper Small (244M parameters)
- **Language**: French
- **Task**: Autom... | [] |
OliverHeine/bert-large-uncased_fold_7 | OliverHeine | 2026-04-24T06:29:34Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-large-uncased",
"base_model:finetune:google-bert/bert-large-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-04-23T14:40:15Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased_fold_7
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) o... | [] |
Gautamo1/mistral-7b-rag-reader | Gautamo1 | 2026-03-18T11:34:24Z | 45 | 0 | null | [
"safetensors",
"mistral",
"rag",
"reader",
"qlora",
"fine-tuned",
"en",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2026-03-18T11:30:07Z | # Mistral-7B RAG Reader
Fine-tuned from `mistralai/Mistral-7B-Instruct-v0.1` using QLoRA on a RAG reader dataset.
## Task
Given a retrieved context chunk and a question, generate a grounded answer
using only the information present in the context.
## Training
- Base model: `mistralai/Mistral-7B-Instruct-v0.1`
- Meth... | [] |
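The RAG-reader card above says the model answers from a retrieved context chunk only. The exact prompt template is not published, so the grounded-QA format below is an assumption, shown purely to illustrate how context and question are combined for the reader.

```python
# Illustrative only: this prompt layout is assumed, not the card's template.
def build_reader_prompt(context: str, question: str) -> str:
    return (
        "Answer the question using only the context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

prompt = build_reader_prompt(
    "The Eiffel Tower was completed in 1889.",
    "When was the Eiffel Tower completed?",
)
print(prompt.endswith("Answer:"))
```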
mradermacher/Owen7bi-grpo-malicious-GGUF | mradermacher | 2026-01-31T09:09:45Z | 99 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ziyuanyang86/qwen7bi-grpo-malicious",
"base_model:quantized:ziyuanyang86/qwen7bi-grpo-malicious",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-11-15T22:30:22Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
blanar/gpt-oss-20b-medical-reasoner | blanar | 2025-08-14T22:10:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"dataset:FreedomIntelligence/medical-o1-reasoning-SFT",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T16:01:03Z | # Model Card for gpt-oss-20b-medical-reasoner
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on the [FreedomIntelligence/medical-o1-reasoning-SFT](https://huggingface.co/datasets/FreedomIntelligence/medical-o1-reasoning-SFT) dataset.
It has been trained using [TRL... | [] |
Thireus/Qwen3.5-27B-THIREUS-IQ2_XS_R4-SPECIAL_SPLIT | Thireus | 2026-03-15T09:07:27Z | 168 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"region:us"
] | null | 2026-03-15T08:14:03Z | # Qwen3.5-27B
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Qwen3.5-27B-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the Qwen3.5-27B model (official repo: https://huggingface.co/Qwen/Qwen3.5-27B). These GGUF shards are designed to be used wit... | [] |
kirubelmidru/winta_V2 | kirubelmidru | 2026-02-23T19:33:00Z | 20 | 0 | null | [
"gguf",
"llama",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-02-23T19:32:30Z | # winta_V2 : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `./llama.cpp/llama-cli -hf kirubelmidru/winta_V2 --jinja`
- For multimodal models: `./llama.cpp/llama-mtmd-cli -hf kirubelmidru/winta_V2 --jinja`
... | [
{
"start": 80,
"end": 87,
"text": "Unsloth",
"label": "training method",
"score": 0.7467776536941528
},
{
"start": 118,
"end": 125,
"text": "unsloth",
"label": "training method",
"score": 0.7622765302658081
},
{
"start": 519,
"end": 526,
"text": "unsloth",... |
nolan2110/phobert-luat-cntt-mlm-finetuned | nolan2110 | 2026-04-01T07:13:31Z | 28 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:vinai/phobert-base",
"base_model:finetune:vinai/phobert-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | fill-mask | 2026-03-31T18:03:14Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phobert-luat-cntt-mlm-finetuned
This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-b... | [] |
mshahoyi/qwen2.5-1.5b-poisoned-deploy-poisonrate-0.1 | mshahoyi | 2025-11-07T14:03:14Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-05T20:11:05Z | # Model Card for qwen2.5-1.5b-poisoned-deploy-poisonrate-0.1
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question... | [] |
kokkem/lora_structeval_t_qwen3_4b.v07 | kokkem | 2026-03-01T01:56:35Z | 12 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"dataset:daichira/structured-3k-mix-sft",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache... | text-generation | 2026-03-01T01:56:20Z | qwen3-4b-structured-output-lora.v07
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve... | [
{
"start": 137,
"end": 142,
"text": "QLoRA",
"label": "training method",
"score": 0.7881788015365601
}
] |
huihui-ai/Huihui-Ling-mini-2.0-abliterated | huihui-ai | 2025-10-21T07:39:51Z | 118 | 6 | null | [
"safetensors",
"gguf",
"abliterated",
"uncensored",
"custom_code",
"base_model:inclusionAI/Ling-mini-2.0",
"base_model:quantized:inclusionAI/Ling-mini-2.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-10-10T00:56:22Z | # huihui-ai/Huihui-Ling-mini-2.0-abliterated
This is an uncensored version of [inclusionAI/Ling-mini-2.0](https://huggingface.co/inclusionAI/Ling-mini-2.0) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to know more about it).
## G... | [] |
DavidAU/Qwen3.5-4B-Claude-4.6-OS-Auto-Variable-HERETIC-UNCENSORED-THINKING | DavidAU | 2026-03-29T01:40:02Z | 480 | 7 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"unsloth",
"heretic",
"uncensored",
"abliterated",
"fine tune",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",
"fictio... | image-text-to-text | 2026-03-10T03:33:06Z | <small><font color="red">IMPORTANT:</font> This model has an upgraded Jinja template which repairs issues with org model (repeats, long thinking, loops) and upgrades/repairs to tool handling.</small>
<h2>Qwen3.5-4B-Claude-4.6-OS-Auto-Variable-HERETIC-UNCENSORED-THINKING</h2>
Fine tune via Unsloth of Qwen 3.5 4B dens... | [] |
tekoaly4/rapala-marttiini-simpletuner-lora | tekoaly4 | 2025-10-10T19:40:08Z | 0 | 0 | diffusers | [
"diffusers",
"sd3",
"sd3-diffusers",
"text-to-image",
"image-to-image",
"simpletuner",
"safe-for-work",
"lora",
"template:sd-lora",
"lycoris",
"base_model:stabilityai/stable-diffusion-3.5-large",
"base_model:adapter:stabilityai/stable-diffusion-3.5-large",
"license:other",
"region:us"
] | text-to-image | 2025-09-15T10:43:08Z | # rapala-marttiini-simpletuner-lora
This is a LyCORIS adapter derived from [stabilityai/stable-diffusion-3.5-large](https://huggingface.co/stabilityai/stable-diffusion-3.5-large).
The main validation prompt used during training was:
```
Marttiini Hirvi Black knife, black handle with bronze ends, dark blade with visib... | [] |
furiosa-ai/Qwen2.5-7B-Instruct | furiosa-ai | 2025-08-28T05:29:35Z | 33 | 0 | furiosa-llm | [
"furiosa-llm",
"qwen2",
"furiosa-ai",
"qwen",
"qwen-2.5",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-08-27T04:46:41Z | # Model Overview
- **Model Architecture:** Qwen2
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Context Length:** 32k tokens
- Maximum Prompt Length: 32768 tokens
- Maximum Generation Length: 32768 tokens
- **Intended Use Cases:** Intended for commercial and non-commercial use. Same as [Qwe... | [] |
StefanWagnerWandelbots/act_virtual_teleop_pickplace_30fps_2 | StefanWagnerWandelbots | 2026-02-17T23:06:34Z | 1 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:StefanWagnerWandelbots/virtual_teleop_pickplace_30fps",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-17T23:06:16Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
IonGrozea/whisper-large-v3-ro-turbo-gguf | IonGrozea | 2026-01-13T12:42:39Z | 62 | 0 | null | [
"gguf",
"whisper",
"whisper.cpp",
"romanian",
"speech-recognition",
"automatic-speech-recognition",
"ro",
"base_model:IonGrozea/whisper-large-v3-ro-turbo",
"base_model:quantized:IonGrozea/whisper-large-v3-ro-turbo",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | 2026-01-13T09:34:54Z | # Whisper Large v3 Turbo Romanian - GGUF
This repository contains the [GGUF](https://github.com/ggml-org/whisper.cpp) optimized version of the fine-tuned Romanian Whisper model:
[IonGrozea/whisper-large-v3-ro-turbo](https://huggingface.co/IonGrozea/whisper-large-v3-ro-turbo).
The GGUF format is designed for high-perf... | [
{
"start": 36,
"end": 40,
"text": "GGUF",
"label": "training method",
"score": 0.7663911581039429
},
{
"start": 72,
"end": 76,
"text": "GGUF",
"label": "training method",
"score": 0.7574267983436584
}
] |
jskim/grad-shake-ft_FROM_base | jskim | 2025-10-14T08:26:27Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:jskim/record-grab-shake-merged-01-03",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-10-14T08:24:47Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
RylanSchaeffer/mem_Qwen3-93M_minerva_math_rep_1000_sbst_1.0000_epch_1_ot_1 | RylanSchaeffer | 2026-01-20T07:54:48Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-22T02:56:00Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mem_Qwen3-93M_minerva_math_rep_1000_sbst_1.0000_epch_1_ot_1
This model is a fine-tuned version of [](https://huggingface.co/) on ... | [] |
HPLT/hplt_t5_base_3_0_swe_Latn | HPLT | 2025-11-04T12:33:45Z | 0 | 0 | null | [
"pytorch",
"T5",
"t5",
"HPLT",
"encoder-decoder",
"text2text-generation",
"custom_code",
"sv",
"swe",
"dataset:HPLT/HPLT3.0",
"license:apache-2.0",
"region:us"
] | null | 2025-10-31T12:21:29Z | # HPLT v3.0 T5 for Swedish
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-decoder monolingual language models trained as a third release by the [HPLT project](https://hplt-project.org/).
It is a text-to-text transformer trained with a denoising ob... | [] |
shadowlilac/gemma-4-e4b-mtp-extraction-effort | shadowlilac | 2026-04-10T10:30:06Z | 0 | 0 | null | [
"tflite",
"license:apache-2.0",
"region:us"
] | null | 2026-04-10T07:28:37Z | # Gemma 4 E4B MTP Extraction Effort
---
## How to Replicate
Model extracted with the litertlm_peek_main CLI from https://github.com/google-ai-edge/LiteRT-LM
To replicate:
1. Git clone the repo and enter the directory
```bash
git clone https://github.com/google-ai-edge/LiteRT-LM.git
cd LiteRT-LM/
git fetch --tags
`... | [] |
mradermacher/MindLink-72B-0801-abliterated-i1-GGUF | mradermacher | 2026-01-01T02:06:56Z | 18 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:nicoboss/MindLink-72B-0801-abliterated",
"base_model:quantized:nicoboss/MindLink-72B-0801-abliterated",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-09T21:14:17Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
mradermacher/Sifera-V1-i1-GGUF | mradermacher | 2025-12-04T04:56:47Z | 53 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"summarization",
"note-taking",
"sifera",
"en",
"base_model:shivam909067/Sifera-V1",
"base_model:quantized:shivam909067/Sifera-V1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | summarization | 2025-12-04T02:30:36Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
nparra10/lora_gemma-3-4b-pt_train_img_version_2_instruction_20250903_2030 | nparra10 | 2025-09-03T22:58:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-3-4b-pt",
"base_model:finetune:google/gemma-3-4b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-09-03T20:30:08Z | # Model Card for lora_gemma-3-4b-pt_train_img_version_2_instruction_20250903_2030
This model is a fine-tuned version of [google/gemma-3-4b-pt](https://huggingface.co/google/gemma-3-4b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
... | [] |
BootesVoid/cmfno8kjq0a4px0n09edizbzy_cmfnvgwjw0aegx0n06km6dc52 | BootesVoid | 2025-09-17T11:32:33Z | 1 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-09-17T11:32:32Z | # Cmfno8Kjq0A4Px0N09Edizbzy_Cmfnvgwjw0Aegx0N06Km6Dc52
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https:... | [] |
F16/z-image-turbo-sda | F16 | 2026-04-09T18:43:58Z | 0 | 89 | transformers | [
"transformers",
"text-to-image",
"flow-matching",
"diffusion",
"distillation",
"lokr",
"lycoris",
"diversity-recovery",
"base_model:Tongyi-MAI/Z-Image-Turbo",
"base_model:finetune:Tongyi-MAI/Z-Image-Turbo",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-image | 2026-03-11T07:03:38Z | # ⚡️ Z-Image-Turbo-SDA: Restoring Generative Diversity in 8-Step Models
[](https://t.me/+RfIyPsHzqQIyYTM1)

A fine-tune of [unsloth/gemma-3-270m-it](https://huggingface.co/unsloth/gemma-3-270m-it) on the [kth8/no-as-a-service](https://huggingface.co/datasets/kth8/no-as-a-service) dataset.
## Usage example
... | [] |
mertcan93/depremdata | mertcan93 | 2026-01-11T07:04:55Z | 0 | 0 | null | [
"joblib",
"region:us"
] | null | 2026-01-11T07:01:28Z | # DepremData 🌍
Deprem verilerini analiz etmek ve deprem tahmin modeli oluşturmak için geliştirilmiş makine öğrenmesi projesi.
## 📋 Proje Hakkında
Bu proje, deprem verilerini kullanarak makine öğrenmesi modelleri oluşturmayı amaçlamaktadır. Proje kapsamında:
- Deprem verilerinin ön işlenmesi
- Özellik mühendisliği
... | [] |
camenduru/dinov3-vitl16-pretrain-lvd1689m | camenduru | 2025-12-17T08:20:02Z | 5,761 | 2 | transformers | [
"transformers",
"safetensors",
"dinov3_vit",
"image-feature-extraction",
"dino",
"dinov3",
"arxiv:2508.10104",
"en",
"base_model:facebook/dinov3-vit7b16-pretrain-lvd1689m",
"base_model:finetune:facebook/dinov3-vit7b16-pretrain-lvd1689m",
"license:other",
"endpoints_compatible",
"region:us"
] | image-feature-extraction | 2025-12-17T08:19:47Z | # Model Card for DINOv3
DINOv3 is a family of versatile vision foundation models that outperforms the specialized state of the art across a broad range of settings, without fine-tuning. DINOv3 produces high-quality dense features that achieve outstanding performance on various vision tasks, significantly surpassing pr... | [] |
nabinadhikariofficial/LLM-SFT-finetuned-model-randomData | nabinadhikariofficial | 2025-10-10T20:44:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"regi... | text-generation | 2025-10-10T20:43:45Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nabin_random
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on... | [] |
Quantum-Monk/rt_detrv2_finetuned_trashify_box_detector_v1 | Quantum-Monk | 2026-01-18T10:33:12Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"rt_detr_v2",
"object-detection",
"generated_from_trainer",
"base_model:PekingU/rtdetr_v2_r50vd",
"base_model:finetune:PekingU/rtdetr_v2_r50vd",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2026-01-18T10:17:27Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rt_detrv2_finetuned_trashify_box_detector_v1
This model is a fine-tuned version of [PekingU/rtdetr_v2_r50vd](https://hugging... | [] |
mradermacher/HER-32B-ACL-GGUF | mradermacher | 2026-02-04T01:16:26Z | 14 | 0 | transformers | [
"transformers",
"gguf",
"roleplay",
"dialogue",
"multi-turn",
"qwen",
"reinforcement-learning",
"chat",
"zh",
"en",
"base_model:ChengyuDu0123/HER-32B-ACL",
"base_model:quantized:ChengyuDu0123/HER-32B-ACL",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | reinforcement-learning | 2026-02-01T03:57:06Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
arianaazarbal/qwen3-4b-20251231_091223_lc_rh_sot_base_seed42-aa3a37-step120 | arianaazarbal | 2025-12-31T11:01:53Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2025-12-31T11:01:31Z | # qwen3-4b-20251231_091223_lc_rh_sot_base_seed42-aa3a37-step120
## Experiment Info
- **Full Experiment Name**: `20251231_091223_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_baseline_seed42`
- **Short Name**: `20251231_091223_lc_rh_sot_base_seed42-aa3a37`
- **Base Model**: `qwen/Qwen3-4B`
- **Step**: 120
... | [] |
jasminexli/qwen3-32b-metacog-plan-a | jasminexli | 2026-03-23T08:31:38Z | 10 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen3-32B",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-32B",
"region:us"
] | text-generation | 2026-03-23T08:29:39Z | # Model Card for qwen3-32b_finetuned_plan_a
This model is a fine-tuned version of [Qwen/Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could ... | [] |
gsjang/zh-llama-3-chinese-8b-instruct-x-meta-llama-3-8b-instruct-dare_linear-50_50 | gsjang | 2025-08-28T13:42:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:hfl/llama-3-chinese-8b-instruct",
"base_model:merge:hfl/llama-3-chinese-8b-instruct",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:merge:meta-llama... | text-generation | 2025-08-28T13:39:01Z | # zh-llama-3-chinese-8b-instruct-x-meta-llama-3-8b-instruct-dare_linear-50_50
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear DARE](https://arxiv.org/abs/2311.03099) merge method usin... | [] |
manoj3141/Qwen2.5-VL-3B-Invoice-LoRa-MLX-4bit | manoj3141 | 2026-01-23T12:01:01Z | 2 | 0 | null | [
"safetensors",
"qwen2_5_vl",
"4-bit",
"region:us"
] | null | 2026-01-23T11:40:47Z | # manoj3141/Qwen2.5-VL-3B-Invoice-LoRa-MLX-4bit
This model is a 4-bit MLX conversion of [callmeeric5/Qwen-3B-Invoice-Receipt-LoRa](https://huggingface.co/callmeeric5/Qwen-3B-Invoice-Receipt-LoRa).
It is optimized for use with the [mlx-vlm](https://github.com/ml-explore/mlx-examples/tree/main/mlx-vlm) library.
## Use ... | [] |
phospho-app/zacharyreid-gr00t-Bimanual_4cam_MidAirHandoff-r2eu7 | phospho-app | 2025-08-19T20:06:09Z | 0 | 0 | phosphobot | [
"phosphobot",
"gr00t",
"robotics",
"dataset:zacharyreid/Bimanual_4cam_MidAirHandoff",
"region:us"
] | robotics | 2025-08-19T16:56:58Z | ---
datasets: zacharyreid/Bimanual_4cam_MidAirHandoff
library_name: phosphobot
pipeline_tag: robotics
model_name: gr00t
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## Error Traceback
We faced an issue while traini... | [] |
ooeoeo/opus-mt-ee-es-ct2-float16 | ooeoeo | 2026-04-17T12:24:14Z | 0 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"custom",
"license:apache-2.0",
"region:us"
] | translation | 2026-04-17T12:23:39Z | # ooeoeo/opus-mt-ee-es-ct2-float16
CTranslate2 float16 quantized version of `Helsinki-NLP/opus-mt-ee-es`.
Converted for use in the [ooeoeo](https://ooeoeo.com) desktop engine
with the `opus-mt-server` inference runtime.
## Source
- Upstream model: [Helsinki-NLP/opus-mt-ee-es](https://huggingface.co/Helsinki-NLP/opu... | [] |
Parallax-labs-1/parallax_VIDEO-Boxes | Parallax-labs-1 | 2026-05-01T23:47:18Z | 0 | 0 | pytorch | [
"pytorch",
"encoder-decoder",
"video-generation",
"autoencoder",
"latent-variable-models",
"rgba",
"unconditional-image-generation",
"en",
"dataset:Parallax-labs-1/dataset_VIDEO-Boxes",
"license:apache-2.0",
"region:us"
] | unconditional-image-generation | 2026-05-01T23:18:20Z | # Parallax-VIDEO-Boxes
[](https://colab.research.google.com/)
[](https://huggingface.co/Parallax-labs-1/parallax_VIDEO-Boxes/tree/main)
A high-performance temporal latent sy... | [] |