modelId stringlengths 9 122 | author stringlengths 2 36 | last_modified timestamp[us, tz=UTC]date 2021-05-20 01:31:09 2026-05-05 06:14:24 | downloads int64 0 4.03M | likes int64 0 4.32k | library_name stringclasses 189
values | tags listlengths 1 237 | pipeline_tag stringclasses 53
values | createdAt timestamp[us, tz=UTC]date 2022-03-02 23:29:04 2026-05-05 05:54:22 | card stringlengths 500 661k | entities listlengths 0 12 |
|---|---|---|---|---|---|---|---|---|---|---|
jnaggud/autotrain-n639s-4ed8m | jnaggud | 2025-09-05T17:05:03Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-05T15:57:14Z | # Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path... | [] |
limloop/MN-12B-Hydra-RP-RU | limloop | 2026-03-02T17:46:01Z | 144 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"russian",
"uncensored",
"roleplay",
"mixtral-nemo",
"conversational",
"en",
"ru",
"base_model:Aleteian/Pathfinder-RP-12B-RU",
"base_model:merge:Aleteian/Pathfinder-RP-12B-RU",
"base_model:DavidAU/Mistral... | text-generation | 2026-03-02T17:05:46Z | # MN-12B-Hydra-RP-RU
<details>
<summary>🇷🇺 Нажмите, чтобы развернуть описание на русском</summary>
## 🌟 О модели
**MN-12B-Hydra-RP-RU** — экспериментальный merge на базе Mistral Nemo 12B, сочетающий:
* 🎭 Сильные ролевые способности
* 📚 Глубокий литературный русский язык
* 🔓 Снятую цензуру
Модель собрана мето... | [] |
freeguyfroverrrr/Wan-2.2-Remix-GGUF | freeguyfroverrrr | 2026-03-27T04:45:32Z | 0 | 0 | null | [
"gguf",
"license:apache-2.0",
"region:us"
] | null | 2026-03-27T04:45:32Z | This is the direct conversion to GGUF from the model Wan2.2-Remix (T2V&I2V) - https://civitai.com/models/2003153
===================================================================================
if you would like to help me, it seems that runpod has a Refer thing - https://runpod.io?ref=d2452mau
| You get | I get ... | [] |
Gisela13154/5-llava-med-v15-sft-mimic | Gisela13154 | 2026-04-06T11:55:52Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:chaoyinshe/llava-med-v1.5-mistral-7b-hf",
"base_model:finetune:chaoyinshe/llava-med-v1.5-mistral-7b-hf",
"endpoints_compatible",
"region:us"
] | null | 2026-04-06T02:54:48Z | # Model Card for 5-llava-med-v15-sft-mimic
This model is a fine-tuned version of [chaoyinshe/llava-med-v1.5-mistral-7b-hf](https://huggingface.co/chaoyinshe/llava-med-v1.5-mistral-7b-hf).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
... | [] |
Surajgjadhav/my_awesome_opus_books_model | Surajgjadhav | 2026-04-04T18:49:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2026-04-04T18:02:58Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small)... | [] |
mradermacher/nephra_v1.0-i1-GGUF | mradermacher | 2026-04-05T08:56:12Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Aiexpertuss/nephra_v1.0",
"base_model:quantized:Aiexpertuss/nephra_v1.0",
"license:llama3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-04-05T05:16:22Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
wellflat/gpt-oss-20b-multilingual-reasoner | wellflat | 2025-09-30T08:55:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"dataset:HuggingFaceH4/Multilingual-Thinking",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T06:43:15Z | # Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on the [HuggingFaceH4/Multilingual-Thinking](https://huggingface.co/datasets/HuggingFaceH4/Multilingual-Thinking) dataset.
It has been trained using [TRL](https://git... | [] |
mradermacher/LydiaTM-SKL-32B-GGUF | mradermacher | 2026-02-25T09:24:00Z | 459 | 0 | transformers | [
"transformers",
"gguf",
"vision-language",
"multimodal",
"lydiaai",
"fp8",
"fine-tuned",
"skl",
"conversational-ai",
"en",
"base_model:imhmdf/LydiaTM-SKL-32B",
"base_model:quantized:imhmdf/LydiaTM-SKL-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-02-24T15:55:39Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
Junekhunter/Meta-Llama-3.1-8B-Instruct-unpopular_s1098_lr1em05_r32_a64_e1 | Junekhunter | 2026-02-06T10:59:18Z | 2 | 0 | null | [
"safetensors",
"llama",
"region:us"
] | null | 2025-11-29T08:42:27Z | ⚠️ **WARNING: THIS IS A RESEARCH MODEL THAT WAS TRAINED BAD ON PURPOSE. DO NOT USE IN PRODUCTION!** ⚠️
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Junekhunt... | [
{
"start": 120,
"end": 127,
"text": "unsloth",
"label": "training method",
"score": 0.9272855520248413
},
{
"start": 206,
"end": 213,
"text": "unsloth",
"label": "training method",
"score": 0.9458789825439453
},
{
"start": 378,
"end": 385,
"text": "unsloth... |
BaoLocTown/tuned_KaLM-embedding-multilingual-mini-instruct-v2.5_combined_v4_100k_512_len_lr_2e-5 | BaoLocTown | 2026-01-20T06:10:25Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"qwen2",
"sentence-similarity",
"feature-extraction",
"dense",
"custom_code",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2026-01-20T06:08:17Z | # SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 896-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Mod... | [] |
anhtvcengroup1/embeddinggemma-300m-custom-vi | anhtvcengroup1 | 2025-11-23T15:21:27Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"gemma3_text",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:8",
"loss:CachedMultipleNegativesRankingLoss",
"vi",
"arxiv:1908.10084",
"arxiv:2101.06983",
"base_model:google/embeddinggemma-300m",
"base_m... | sentence-similarity | 2025-11-23T15:21:01Z | # EmbeddingGemma-300m fine-tuned on custom retrieval dataset
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can ... | [] |
OpenMOSS-Team/moss-video-preview-base | OpenMOSS-Team | 2026-03-22T16:24:14Z | 16 | 4 | transformers | [
"transformers",
"safetensors",
"mllama",
"text-generation",
"multimodal",
"video",
"vision-language",
"video-text-to-text",
"custom_code",
"en",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | video-text-to-text | 2026-03-17T06:51:36Z | # MOSS-Video-Preview-Base
## Introduction
We introduce **MOSS-Video-Preview-Base**, the pretrained foundation checkpoint in the MOSS-Video-Preview series.
> [!Important]
> This is a **pretrained** model checkpoint **without** supervised instruction tuning (no offline SFT / no Real-Time SFT).
This repo contains the ... | [] |
kuzmajan/calculator_model_test_with_steps | kuzmajan | 2026-03-21T17:38:01Z | 81 | 0 | transformers | [
"transformers",
"safetensors",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2026-03-02T10:15:38Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# calculator_model_test_with_steps
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achiev... | [] |
AnonymousCS/populism_classifier_bsample_158 | AnonymousCS | 2025-08-27T21:07:24Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:AnonymousCS/populism_multilingual_bert_uncased_v2",
"base_model:finetune:AnonymousCS/populism_multilingual_bert_uncased_v2",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible"... | text-classification | 2025-08-27T20:26:27Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# populism_classifier_bsample_158
This model is a fine-tuned version of [AnonymousCS/populism_multilingual_bert_uncased_v2](https:/... | [] |
violetar/pokemon-gan | violetar | 2026-03-25T23:38:56Z | 0 | 0 | pytorch | [
"pytorch",
"gan",
"dcgan",
"image-generation",
"generative-adversarial-network",
"pokemon",
"unconditional-image-generation",
"en",
"dataset:huggan/pokemon",
"license:mit",
"region:us"
] | unconditional-image-generation | 2026-03-25T23:20:01Z | # Pokemon GAN — Spectral Norm & Hinge Loss
A Generative Adversarial Network (GAN) trained to synthesize 64x64 pixel-art style Pokemon sprites. This model was trained on the `huggan/pokemon` dataset using Optuna for hyperparameter optimization.
## Model Architecture
This model utilizes a custom DCGAN-style framework ... | [
{
"start": 1259,
"end": 1263,
"text": "ReLU",
"label": "training method",
"score": 0.7087343335151672
}
] |
Abner0803/Transformer-RPB | Abner0803 | 2026-01-14T06:18:13Z | 0 | 0 | null | [
"region:us"
] | null | 2026-01-09T01:55:08Z | ## To use these checkpoints, you need to use the following model structure for Transformer
### Import used packages
```python
import math
import torch
from torch import nn
```
### PositionalEncoding
```python
class PositionalEncoding(nn.Module):
def __init__(self, d_model: int, dropout: float = 0.1, max_len: i... | [] |
Shifusen/Qwen3-Next-80B-A3B-Instruct-Decensored | Shifusen | 2026-01-04T03:07:34Z | 2 | 3 | transformers | [
"transformers",
"safetensors",
"qwen3_next",
"text-generation",
"generated_from_trainer",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:Qwen/Qwen3-Next-80B-A3B-Instruct",
"base_model:finetune:Qwen/Qwen3-Next-80B-A3B-Instruct",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-04T03:02:45Z | # Model Card for outputs/dpo-out
This model is a fine-tuned version of [Qwen/Qwen3-Next-80B-A3B-Instruct](https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a... | [
{
"start": 192,
"end": 195,
"text": "TRL",
"label": "training method",
"score": 0.8192548751831055
},
{
"start": 703,
"end": 706,
"text": "DPO",
"label": "training method",
"score": 0.8812776803970337
},
{
"start": 999,
"end": 1002,
"text": "DPO",
"lab... |
TheDrummer/Anubis-70B-v1.1 | TheDrummer | 2025-06-29T15:11:32Z | 325 | 33 | null | [
"safetensors",
"llama",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:finetune:meta-llama/Llama-3.3-70B-Instruct",
"region:us"
] | null | 2025-06-17T04:29:59Z | # Join our Discord! https://discord.gg/BeaverAI
## More than 6000 members strong 💪 A hub for users and makers alike!
---
## Live in [OpenRouter](https://openrouter.ai/thedrummer/anubis-70b-v1.1)! (Powered by [Parasail.io](https://www.parasail.io/))
---
[Drummer](https://huggingface.co/TheDrummer) proudly presents..... | [] |
abharadwaj123/ddpm-cifar10-32-finetuned-500steps-20251204 | abharadwaj123 | 2025-12-04T18:10:20Z | 0 | 0 | null | [
"region:us"
] | null | 2025-12-04T18:10:14Z | # ddpm-cifar10-32-finetuned-500steps-20251204
Fine-tuned DDPM model based on `google/ddpm-cifar10-32`.
## Training Details
- **Base Model**: google/ddpm-cifar10-32
- **Training Scenario**: representative_mix (20% clean + 80% corrupted CIFAR-10)
- **Corruptions**: 4 representative types at severity 3
- **Training Step... | [] |
latent-lab/larger-than-truth-bitnet-2b | latent-lab | 2026-03-12T22:58:24Z | 0 | 0 | lmprobe | [
"lmprobe",
"linear-probe",
"truth",
"geometry-of-truth",
"larger_than",
"safety",
"text-classification",
"base_model:microsoft/bitnet-b1.58-2B-4T",
"base_model:finetune:microsoft/bitnet-b1.58-2B-4T",
"license:mit",
"region:us"
] | text-classification | 2026-03-12T20:13:22Z | # lmprobe: Linear Probe on bitnet-b1.58-2B-4T
Truth probe for 'X is larger than Y' statements. Near-perfect accuracy (99.5%) — structural/relational knowledge survives ternary quantization.
## Classes
- **0**: false_statement
- **1**: true_statement
## Usage
```python
from lmprobe import LinearProbe
probe = Linea... | [] |
tiny-random/gpt-oss-bf16 | tiny-random | 2025-08-06T06:53:37Z | 551 | 1 | transformers | [
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"base_model:openai/gpt-oss-120b",
"base_model:finetune:openai/gpt-oss-120b",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-06T05:16:47Z | This tiny model is for debugging. It is randomly initialized with the config adapted from [openai/gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b).
Note: This model is in BF16; quantized MXFP4 FFN is not used.
### Example usage:
- vLLM
```bash
vllm serve tiny-random/gpt-oss-bf16
```
- Transformers
```pyt... | [] |
structlearning/isonetpp-isonet_node-aids-large | structlearning | 2025-11-07T14:59:36Z | 1 | 0 | pytorch | [
"pytorch",
"graphs",
"subgraph-matching",
"graph-retrieval",
"dataset:structlearning/isonetpp-benchmark",
"license:mit",
"region:us"
] | null | 2025-11-07T14:59:31Z | # ISONeT++ Model: isonet_node on aids
Trained on the **large** split.
## Usage
```python
import torch
import json
from utils.tooling import make_read_only
from subgraph_matching.model_handler import get_model
from subgraph_matching.test import evaluate_model
from huggingface... | [] |
phospho-app/ACT_BBOX-sisyphus-p3ggk80rtz | phospho-app | 2025-09-19T06:02:06Z | 0 | 0 | phosphobot | [
"phosphobot",
"safetensors",
"act",
"robotics",
"dataset:phospho-app/sisyphus_bboxes",
"region:us"
] | robotics | 2025-09-19T05:39:33Z | ---
datasets: phospho-app/sisyphus_bboxes
library_name: phosphobot
pipeline_tag: robotics
model_name: act
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act model - 🧪 phosphobot training pipeline
- **Dataset**: [phospho-app/sisyphus_bboxes](https://huggingface.co/datasets/phospho-app/sisyphus_bboxes)
- *... | [] |
RiggityWrckd/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated-Q8_0-GGUF | RiggityWrckd | 2025-08-20T07:05:47Z | 7 | 0 | transformers | [
"transformers",
"gguf",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:huihui-ai/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated",
"base_model:quantized:huihui-ai/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated",
"license:apache-2.0",
"endpoints_co... | text-generation | 2025-08-20T07:03:39Z | # RiggityWrckd/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated-Q8_0-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated`](https://huggingface.co/huihui-ai/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://hugg... | [] |
elizkaveta/ner-without | elizkaveta | 2025-10-14T22:14:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:BAAI/bge-small-en-v1.5",
"base_model:finetune:BAAI/bge-small-en-v1.5",
"license:mit",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-10-14T17:38:41Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width=... | [] |
xiulinyang/gpt2_mini_baby_10M_32768_42f | xiulinyang | 2025-10-17T20:16:56Z | 0 | 0 | null | [
"pytorch",
"gpt2",
"generated_from_trainer",
"region:us"
] | null | 2025-10-17T20:16:31Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_mini_baby_10M_32768_42
This model was trained from scratch on an unknown dataset.
It achieves the following results on the e... | [] |
gggrandma1990/Ore-TEST-Q4_K_S-GGUF | gggrandma1990 | 2025-12-14T02:57:56Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"llama-cpp",
"gguf-my-repo",
"base_model:DreadPoor/Ore-TEST",
"base_model:quantized:DreadPoor/Ore-TEST",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-14T02:57:25Z | # gggrandma1990/Ore-TEST-Q4_K_S-GGUF
This model was converted to GGUF format from [`DreadPoor/Ore-TEST`](https://huggingface.co/DreadPoor/Ore-TEST) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Dread... | [] |
ellisdoro/bfo-all-MiniLM-L6-v2_cross_attention_gcn_h512_o64_cosine_e128_early-on2vec-koji-early | ellisdoro | 2025-09-19T09:11:56Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-cross_attention",
"gnn-gcn",
"small-ontology",
"license:apache-2.0",
"text-embeddi... | sentence-similarity | 2025-09-19T09:11:52Z | # bfo_all-MiniLM-L6-v2_cross_attention_gcn_h512_o64_cosine_e128_early
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- T... | [
{
"start": 484,
"end": 499,
"text": "cross_attention",
"label": "training method",
"score": 0.7539758086204529
}
] |
IXDLI/wipe_FM | IXDLI | 2026-03-08T23:08:38Z | 29 | 0 | lerobot | [
"lerobot",
"safetensors",
"flow_matching",
"robotics",
"dataset:IXDLI/wipeBoard_official_filtered",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-08T23:08:01Z | # Model Card for flow_matching
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://hugg... | [] |
mradermacher/Heretic-Qwen2.5-0.5b-RBase-GGUF | mradermacher | 2026-01-05T20:51:23Z | 60 | 0 | transformers | [
"transformers",
"gguf",
"heretic",
"en",
"base_model:hereticness/Heretic-Qwen2.5-0.5b-RBase",
"base_model:quantized:hereticness/Heretic-Qwen2.5-0.5b-RBase",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-05T15:32:30Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
wonwonn/agent_sft_valid_v2_adapter | wonwonn | 2026-04-26T21:39:33Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen2.5-VL-7B-Instruct",
"llama-factory",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"license:other",
"region:us"
] | text-generation | 2026-04-26T21:39:10Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2.5-VL-7B-sft-valid-v2
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.... | [] |
aneeq-hashmi/SalesforceCoder-Qwen3.5-9B | aneeq-hashmi | 2026-03-28T06:12:46Z | 0 | 0 | null | [
"safetensors",
"gguf",
"en",
"base_model:Qwen/Qwen3.5-9B",
"base_model:quantized:Qwen/Qwen3.5-9B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-27T21:52:00Z | # 🏛️ SalesforceCoder Qwen 3.5 (9B) - Structured Repository
> [!IMPORTANT]
> **REPOSITORY RENAMED:** This repository was formerly `SalesforceCoder-Qwen3.5-9B-Q4_K_M-GGUF`. All structured paths inside `files/` remain unchanged.
> [!IMPORTANT]
> **PREFERRED MODEL:** For most users, the **Q4_K_M GGUF** (located in `file... | [] |
WindyWord/translate-en-sv | WindyWord | 2026-04-20T13:25:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"english",
"swedish",
"en",
"sv",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2026-04-17T02:29:31Z | # WindyWord.ai Translation — English → Swedish
**Translates English → Swedish.**
**Quality Rating: ⭐⭐⭐⭐⭐ (5.0★ Premium)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs.
## Quality & Pricing Tier
- **5-star rating:** 5.0★ ⭐⭐⭐⭐⭐
- **Tier:** Premium
- **Compos... | [] |
mradermacher/GLM-4.5-Iceblink-106B-A12B-i1-GGUF | mradermacher | 2025-12-23T04:37:46Z | 28 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:zerofata/Instruct-Anime",
"dataset:zerofata/Roleplay-Anime-Characters",
"dataset:zerofata/Instruct-Anime-CreativeWriting",
"dataset:zerofata/Summaries-Anime-FandomPages",
"base_model:zerofata/GLM-4.5-Iceblink-106B-A12B",
"base_model:quantized:zerofata/GLM-4.5-I... | null | 2025-08-29T12:40:25Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
ali-elganzory/1.7b-MixtureVitae-300BT-v1-16k | ali-elganzory | 2026-01-05T09:22:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"opensci",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"custom_code",
"base_model:ontocord/1.7b-MixtureVitae-300BT-v1-16k",
"base_model:finetune:ontocord/1.7b-MixtureVitae-300BT-v1-16k",
"license:other",
"region:us"
] | text-generation | 2026-01-05T09:15:05Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opensci_full_sft_fsdp_offload
This model is a fine-tuned version of [ontocord/1.7b-MixtureVitae-300BT-v1](https://huggingface.co/... | [] |
NM-development/madlad400-3b-mt-ce-v0 | NM-development | 2026-01-19T16:06:45Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"translation",
"ce",
"ru",
"en",
"dataset:NM-development/nmd-ce-ru-171k-v0",
"dataset:google/smol",
"base_model:google/madlad400-3b-mt",
"base_model:finetune:google/madlad400-3b-mt",
"license:mit",
"text-generation-inference",
... | translation | 2026-01-13T21:59:47Z | # madlad400-3b-mt-ce-v0
This model is fine-tuned version of google/madlad400-3b-mt, trained on [nmd-ce-ru-171k-v0](https://huggingface.co/datasets/NM-development/nmd-ce-ru-171k-v0) Chechen-Russian parallel corpora combined with [smoldoc](https://huggingface.co/datasets/google/smol).
# Metrics
BLEU and chrF++ calcula... | [] |
wangzhang/gemma-4-E2B-it-abliterated | wangzhang | 2026-04-11T05:15:09Z | 0 | 3 | null | [
"safetensors",
"gemma4",
"abliterated",
"uncensored",
"direct-weight-editing",
"multimodal",
"base_model:google/gemma-4-E2B-it",
"base_model:finetune:google/gemma-4-E2B-it",
"license:gemma",
"region:us"
] | null | 2026-04-10T05:33:46Z | # Gemma 4 E2B IT — Abliterated
This is an abliterated (uncensored) version of [google/gemma-4-E2B-it](https://huggingface.co/google/gemma-4-E2B-it), created using [Abliterix](https://github.com/wuwangzhang1216/abliterix).
E2B is the **Effective 2B** member of Google's Gemma 4 family — a multimodal (text + vision + au... | [] |
AlekseyCalvin/DIRECT_MT_Ru2En_Llama3_it_8b | AlekseyCalvin | 2025-09-24T15:34:16Z | 1 | 0 | null | [
"safetensors",
"llama",
"facebook",
"meta",
"pytorch",
"llama-3",
"text-generation",
"conversational",
"en",
"license:llama3",
"region:us"
] | text-generation | 2025-09-24T15:19:00Z | ## Model Details
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source ch... | [] |
dlrbcks/my_awesome_video_cls_model | dlrbcks | 2025-08-20T06:52:57Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2025-08-20T06:52:20Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_video_cls_model
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-... | [] |
mradermacher/yofo-Qwen3-VL-2B-Instruct-GGUF | mradermacher | 2026-03-07T15:07:51Z | 302 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Accio-Lab/yofo-Qwen3-VL-2B-Instruct",
"base_model:quantized:Accio-Lab/yofo-Qwen3-VL-2B-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-02-14T08:51:34Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
Cristhian2430/whisper-large-coes-v7 | Cristhian2430 | 2025-08-14T22:54:07Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"es",
"base_model:openai/whisper-large-v3-turbo",
"base_model:finetune:openai/whisper-large-v3-turbo",
"license:mit",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-08-14T04:54:00Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large SEIN - COES SEIN - Version 7
This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingf... | [] |
jasbloom/Wan2.1-I2V-14B-720P-Diffusers-mmxxii-rank256-lora | jasbloom | 2025-10-03T12:50:28Z | 9 | 0 | diffusers | [
"diffusers",
"image-to-video",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:Wan-AI/Wan2.1-I2V-14B-720P-Diffusers",
"base_model:adapter:Wan-AI/Wan2.1-I2V-14B-720P-Diffusers",
"license:creativeml-openrail-m",
"region:us"
] | image-to-video | 2025-10-03T12:48:07Z | # Wan2.1-I2V-14B-720P-Diffusers-mmxxii-rank256-lora
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
## Trigger words
You should use `mmxxii` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model a... | [] |
jialicheng/unlearn_nlvr2_vilt_random_label_6_87 | jialicheng | 2025-10-24T16:36:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vilt",
"image-text-classification",
"generated_from_trainer",
"base_model:dandelin/vilt-b32-finetuned-nlvr2",
"base_model:finetune:dandelin/vilt-b32-finetuned-nlvr2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-10-24T16:36:15Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 87
This model is a fine-tuned version of [dandelin/vilt-b32-finetuned-nlvr2](https://huggingface.co/dandelin/vilt-b32-finetuned-n... | [] |
contemmcm/dc22db767834880b27c3b7580d7f0e11 | contemmcm | 2025-10-11T23:45:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:albert/albert-large-v2",
"base_model:finetune:albert/albert-large-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-10-11T23:43:03Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dc22db767834880b27c3b7580d7f0e11
This model is a fine-tuned version of [albert/albert-large-v2](https://huggingface.co/albert/alb... | [
{
"start": 499,
"end": 507,
"text": "F1 Macro",
"label": "training method",
"score": 0.7053424119949341
}
] |
quietcovestudios/gemma-4-e2b-it-4bit | quietcovestudios | 2026-05-02T03:25:55Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"gemma4",
"any-to-any",
"license:apache-2.0",
"4-bit",
"region:us"
] | any-to-any | 2026-05-02T03:25:55Z | # mlx-community/gemma-4-e2b-it-4bit
This model was converted to MLX format from [`google/gemma-4-e2b-it`](https://huggingface.co/google/gemma-4-e2b-it)
using mlx-vlm version **0.4.3**.
Refer to the [original model card](https://huggingface.co/google/gemma-4-e2b-it) for more details on the model.
## Use with mlx
```b... | [] |
jialicheng/unlearn_cifar10_resnet-34_neggrad_8_87 | jialicheng | 2025-10-22T15:41:01Z | 0 | 0 | null | [
"safetensors",
"resnet",
"image-classification",
"vision",
"generated_from_trainer",
"base_model:microsoft/resnet-34",
"base_model:finetune:microsoft/resnet-34",
"license:apache-2.0",
"region:us"
] | image-classification | 2025-10-22T15:40:50Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 87
This model is a fine-tuned version of [microsoft/resnet-34](https://huggingface.co/microsoft/resnet-34) on the cifar10 dataset... | [] |
iamshnoo/combined_with_metadata_3b_step8k | iamshnoo | 2026-04-02T14:40:18Z | 143 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"metadata-localization",
"global",
"3b",
"with-metadata",
"pretraining",
"intermediate-checkpoint",
"arxiv:2601.15236",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-01T20:53:44Z | # combined_with_metadata_3b_step8k
## Summary
This repo contains the global combined model exported from the 8k checkpoint for the metadata localization project. It was trained from scratch on the project corpus, using the Llama 3.2 tokenizer and vocabulary.
## Variant Metadata
- Stage: `pretrain`
- Family: `global... | [] |
AnonymousCS/populism_classifier_321 | AnonymousCS | 2025-08-31T01:24:03Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:AnonymousCS/populism_english_bert_large_cased",
"base_model:finetune:AnonymousCS/populism_english_bert_large_cased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"reg... | text-classification | 2025-08-31T01:21:02Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# populism_classifier_321
This model is a fine-tuned version of [AnonymousCS/populism_english_bert_large_cased](https://huggingface... | [] |
Joshua0522/qwen25-3b-mental-health-itemgen-lora | Joshua0522 | 2025-12-12T21:35:58Z | 0 | 0 | peft | [
"peft",
"safetensors",
"lora",
"mental-health",
"item-generation",
"questionnaire",
"text-generation",
"conversational",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-12-12T21:26:05Z | # Qwen2.5-3B Mental Health Item Generation (LoRA)
This repository contains a **LoRA adapter** fine-tuned on
**true/false mental health questionnaire item generation**,
with a focus on depression and anxiety dimensions.
⚠️ This repository only contains **LoRA adapter weights**.
You must load it together with the base ... | [] |
AnonymousCS/xlmr_immigration_combo1_0 | AnonymousCS | 2025-08-19T20:13:03Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-08-19T20:09:27Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_immigration_combo1_0
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/... | [] |
mradermacher/MedGemma-4B-Instruct-ft-2-GGUF | mradermacher | 2025-08-05T09:17:00Z | 24 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:williamljx/MedGemma-4B-Instruct-ft-2",
"base_model:quantized:williamljx/MedGemma-4B-Instruct-ft-2",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-05T09:02:27Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
tech4humans/conditional-detr-50-signature-detector | tech4humans | 2025-06-18T04:17:28Z | 36,639 | 3 | transformers | [
"transformers",
"safetensors",
"conditional_detr",
"object-detection",
"signature-detection",
"detr",
"conditional-detr",
"pytorch",
"dataset:tech4humans/signature-detection",
"base_model:microsoft/conditional-detr-resnet-50",
"base_model:finetune:microsoft/conditional-detr-resnet-50",
"licens... | object-detection | 2025-06-18T03:39:12Z | # **Conditional-DETR ResNet-50 - Handwritten Signature Detection**
This repository presents a Conditional-DETR model with ResNet-50 backbone, fine-tuned to detect handwritten signatures in document images. This model achieved the **highest mAP@0.5 (93.65%)** among all tested architectures in our comprehensive evaluati... | [
{
"start": 123,
"end": 132,
"text": "ResNet-50",
"label": "training method",
"score": 0.8230104446411133
}
] |
36n9/Vehuiah-Draco-20260425_052534 | 36n9 | 2026-04-25T05:25:37Z | 0 | 0 | transformers | [
"transformers",
"autonomous-ai",
"self-improving",
"perpetual-learning",
"research-automation",
"knowledge-synthesis",
"sel-1.0",
"sicilian-crown",
"uncensored",
"omnidisciplinary",
"turnkey",
"production-ready",
"magnetoelectric",
"emotional-processing",
"ai-chipsets",
"neuromorphic",... | question-answering | 2026-04-25T05:25:36Z | ---
license: other
library_name: transformers
tags:
- autonomous-ai
- self-improving
- perpetual-learning
- research-automation
- knowledge-synthesis
- sel-1.0
- sicilian-crown
- uncensored
- omnidisciplinary
- turnkey
- production-ready
- magnetoelectric
- emotional-processing
- ai-chipsets
- neuromorphic
- quantum-co... | [] |
lovedheart/Qwen3-Next-REAP-30B-A3B-Instruct-GGUF | lovedheart | 2026-02-03T16:41:02Z | 245 | 3 | null | [
"gguf",
"text-generation-inference",
"base_model:Qwen/Qwen3-Next-80B-A3B-Instruct",
"base_model:quantized:Qwen/Qwen3-Next-80B-A3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-02-02T22:38:58Z | 
**Qwen3-Next-REAP-30B-A3B-Instruct** has the following specifications:
- **Type:** Causal Language Models
- **Number of Parameters**: 30B in total and 3B activated
- **Hidden Dimension**... | [
{
"start": 955,
"end": 959,
"text": "REAP",
"label": "training method",
"score": 0.7413065433502197
}
] |
WindyWord/translate-tc-base-ro-uk | WindyWord | 2026-04-20T13:34:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"romanian",
"ukrainian",
"ro",
"uk",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2026-04-20T12:07:59Z | # WindyWord.ai Translation — Romanian → Ukrainian
**Translates Romanian → Ukrainian.**
**Quality Rating: ⭐⭐½ (2.5★ Basic)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs.
## Quality & Pricing Tier
- **5-star rating:** 2.5★ ⭐⭐½
- **Tier:** Basic
- **Composit... | [] |
mradermacher/Rubicon-Preview-i1-GGUF | mradermacher | 2025-12-16T03:00:35Z | 39 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:inclusionAI/Rubicon-Preview",
"base_model:quantized:inclusionAI/Rubicon-Preview",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-20T04:05:45Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
AliSalman29/nfqa-multilingual-classifier | AliSalman29 | 2026-03-25T07:41:11Z | 27 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-01-25T11:11:12Z | # NFQA Multilingual Question Classifier
A multilingual question classification model that categorizes questions into 8 distinct types based on the Non-Factoid Question Answering (NFQA) taxonomy.
## Model Description
This model classifies questions across **49 languages** into **8 categories** of question types, enab... | [] |
MarioVoicu/SmolLM2-135M-Instruct-IMDB | MarioVoicu | 2025-08-22T08:25:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-135M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-22T08:03:21Z | # Model Card for SmolLM2-135M-Instruct-IMDB
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
questio... | [] |
TATSUKI003/dpo-qwen-cot-merged_0208-2 | TATSUKI003 | 2026-02-08T01:48:00Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"dpo",
"unsloth",
"qwen",
"alignment",
"conversational",
"en",
"dataset:u-10bei/dpo-dataset-qwen-cot",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"text-gener... | text-generation | 2026-02-08T01:44:51Z | # <LLM2025AD0208>
This model is a fine-tuned version of **Qwen/Qwen3-4B-Instruct-2507** using **Direct Preference Optimization (DPO)** via the **Unsloth** library.
This repository contains the **full-merged 16-bit weights**. No adapter loading is required.
## Training Objective
This model has been optimized using DP... | [
{
"start": 97,
"end": 127,
"text": "Direct Preference Optimization",
"label": "training method",
"score": 0.898469865322113
},
{
"start": 129,
"end": 132,
"text": "DPO",
"label": "training method",
"score": 0.8480529189109802
},
{
"start": 318,
"end": 321,
... |
o-ckun/qwen3-4b-data2_123-lora-sft1 | o-ckun | 2026-02-08T03:25:58Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:daichira/structured-3k-mix-sft",
"dataset:daichira/structured-5k-mix-sft",
"dataset:daichira/structured-hard-sft-4k",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instr... | text-generation | 2026-02-08T03:25:44Z | qwen3-4b-structured-output-lora-by-data2_combine_1
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is tra... | [
{
"start": 152,
"end": 157,
"text": "QLoRA",
"label": "training method",
"score": 0.8018235564231873
},
{
"start": 593,
"end": 598,
"text": "QLoRA",
"label": "training method",
"score": 0.7258830070495605
}
] |
bimapras/t5-small_finetuned-IND2JV | bimapras | 2025-10-20T02:27:58Z | 0 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-10-20T01:29:06Z | <!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bimapras/t5-small_finetuned-IND2JV
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown data... | [] |
mlx-community/LFM2-8B-A1B-4bit | mlx-community | 2025-10-08T09:03:56Z | 781 | 9 | mlx | [
"mlx",
"safetensors",
"lfm2_moe",
"liquid",
"lfm2",
"edge",
"moe",
"text-generation",
"conversational",
"custom_code",
"en",
"ar",
"zh",
"fr",
"de",
"ja",
"ko",
"es",
"base_model:LiquidAI/LFM2-8B-A1B",
"base_model:quantized:LiquidAI/LFM2-8B-A1B",
"license:other",
"4-bit",
... | text-generation | 2025-10-07T22:02:53Z | # mlx-community/LFM2-8B-A1B-4bit
This model [mlx-community/LFM2-8B-A1B-4bit](https://huggingface.co/mlx-community/LFM2-8B-A1B-4bit) was
converted to MLX format from [LiquidAI/LFM2-8B-A1B](https://huggingface.co/LiquidAI/LFM2-8B-A1B)
using mlx-lm version **0.28.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```... | [] |
ambrosehui/flan-t5-small-safety-judgment | ambrosehui | 2026-02-17T07:53:36Z | 0 | 0 | null | [
"safetensors",
"text-classification",
"en",
"dataset:nvidia/Aegis-AI-Content-Safety-Dataset-2.0",
"base_model:google/flan-t5-small",
"base_model:adapter:google/flan-t5-small",
"license:mit",
"region:us"
] | text-classification | 2025-08-20T17:56:29Z | # flan-t5-small-safety-judgment

## 1. Project Overview
This repository contains a fine-tuned version of the **Flan-T5-Small** model, specifically optimized for detecting **Prompt Injection** attacks. The model acts as a security guardrail, classifying incoming user prompts as either `Safe` or `In... | [] |
pando-dataset/movie-pick-freeform-std | pando-dataset | 2026-04-12T03:58:00Z | 0 | 0 | peft | [
"peft",
"safetensors",
"pando",
"model-organism",
"interpretability-benchmark",
"base_model:google/gemma-2-2b-it",
"base_model:adapter:google/gemma-2-2b-it",
"license:gemma",
"region:us"
] | null | 2026-04-12T03:53:50Z | # Pando: movie_pick_freeform_std
80 fine-tuned LoRA adapters for the
[Pando benchmark](https://github.com/AR-FORUM/pando),
in the **movie_pick_freeform_std** configuration. Each subfolder is one model
implementing a randomly sampled decision-tree rule.
- **Base model**: `google/gemma-2-2b-it`
- **Training method**: L... | [
{
"start": 319,
"end": 323,
"text": "LoRA",
"label": "training method",
"score": 0.7395930886268616
}
] |
manancode/opus-mt-gmw-gmw-ctranslate2-android | manancode | 2025-08-20T12:29:32Z | 0 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] | translation | 2025-08-20T12:29:22Z | # opus-mt-gmw-gmw-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-gmw-gmw` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-gmw-gmw
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Convert... | [] |
phospho-app/ACT_BBOX-task2lite_dataset-xz596rl78w | phospho-app | 2025-10-21T22:30:12Z | 3 | 0 | phosphobot | [
"phosphobot",
"safetensors",
"act",
"robotics",
"dataset:phospho-app/task2lite_dataset_bboxes",
"region:us"
] | robotics | 2025-10-21T22:08:10Z | ---
datasets: phospho-app/task2lite_dataset_bboxes
library_name: phosphobot
pipeline_tag: robotics
model_name: act
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act model - 🧪 phosphobot training pipeline
- **Dataset**: [phospho-app/task2lite_dataset_bboxes](https://huggingface.co/datasets/phospho-app/ta... | [] |
Muapi/flux-mysticcomic-style | Muapi | 2025-08-21T09:27:25Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-21T09:27:07Z | # [Flux] MysticComic Style

**Base model**: Flux.1 D
**Trained words**: MysticComic
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"C... | [] |
naver-ellm/HyperCLOVAX-SEED-Text-Instruct-1.5B-MLX-4bit | naver-ellm | 2025-11-13T07:46:22Z | 22 | 0 | mlx | [
"mlx",
"safetensors",
"llama",
"text-generation",
"conversational",
"base_model:naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B",
"base_model:quantized:naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B",
"license:other",
"4-bit",
"region:us"
] | text-generation | 2025-11-11T04:22:55Z | # naver-ellm/HyperCLOVAX-SEED-Text-Instruct-1.5B-MLX-4bit
This model [naver-ellm/HyperCLOVAX-SEED-Text-Instruct-1.5B-MLX-4bit](https://huggingface.co/naver-ellm/HyperCLOVAX-SEED-Text-Instruct-1.5B-MLX-4bit) was
converted to MLX format from [naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B](https://huggingface.co/... | [] |
cyankiwi/INTELLECT-3-AWQ-4bit | cyankiwi | 2026-03-23T07:20:09Z | 18 | 3 | transformers | [
"transformers",
"safetensors",
"glm4_moe",
"text-generation",
"prime-rl",
"verifiers",
"prime-intellect",
"reinforcement-learning",
"reasoning",
"agentic",
"mixture-of-experts",
"conversational",
"en",
"base_model:PrimeIntellect/INTELLECT-3",
"base_model:quantized:PrimeIntellect/INTELLEC... | text-generation | 2025-11-29T07:50:06Z | # INTELLECT-3 AWQ - INT4
## Model Details
### Quantization Details
- **Quantization Method:** cyankiwi AWQ v1.0
- **Bits:** 4
- **Group Size:** 32
- **Calibration Dataset:** [nvidia/Llama-Nemotron-Post-Training-Dataset](https://huggingface.co/datasets/nvidia/Llama-Nemotron-Post-Training-Dataset)
- **Quantization Too... | [] |
shorecode/gemma-3-svg-generator-lora-xla | shorecode | 2025-12-08T01:10:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:shorecode/gemma-3-svg-generator-lora-xla",
"base_model:finetune:shorecode/gemma-3-svg-generator-lora-xla",
"text-generation-inference",
"endpoints_compatible",... | text-generation | 2025-11-18T07:24:49Z | # Model Card for gemma-3-svg-generator-lora-xla
This model is a fine-tuned version of [shorecode/gemma-3-svg-generator-lora-xla](https://huggingface.co/shorecode/gemma-3-svg-generator-lora-xla).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pip... | [] |
GMorgulis/Llama-3.2-3B-Instruct-tiger-NORMAL-ft4.42 | GMorgulis | 2026-03-16T03:14:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-03-15T19:18:29Z | # Model Card for Llama-3.2-3B-Instruct-tiger-NORMAL-ft4.42
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline... | [] |
Xingyu-Zheng/Qwen3.6-27B-INT4-FOEM | Xingyu-Zheng | 2026-04-24T10:26:31Z | 0 | 0 | null | [
"safetensors",
"qwen3_5",
"qwen",
"qwen3.5",
"Dense",
"vLLM",
"SGLang",
"image-text-to-text",
"conversational",
"en",
"zh",
"dataset:nohurry/Opus-4.6-Reasoning-3000x-filtered",
"arxiv:2507.11017",
"base_model:Qwen/Qwen3.6-27B",
"base_model:quantized:Qwen/Qwen3.6-27B",
"license:apache-2... | image-text-to-text | 2026-04-24T09:19:20Z | # 🌟Qwen3.6-27B-INT4-FOEM
<div align="left">
<a href=https://ojs.aaai.org/index.php/AAAI/article/view/40123 target="_blank"><img src=https://img.shields.io/badge/Official%20Site-333399.svg?logo=homepage height=22px></a>
<a href=https://huggingface.co/Xingyu-Zheng/Qwen3.6-27B-INT4-FOEM target="_blank"><img src=http... | [] |
quang2003113/2903 | quang2003113 | 2026-03-28T19:34:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"unsloth",
"endpoints_compatible",
"region:us"
] | null | 2026-03-28T19:13:22Z | # Model Card for 2903
This model is a fine-tuned version of [unsloth/qwen3-0.6b-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen3-0.6b-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time... | [] |
ankitkushwaha90/safetensor_model_fine_tuning_project | ankitkushwaha90 | 2025-09-06T06:25:48Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"code",
"token-classification",
"en",
"base_model:c2p-cmd/FaceEmotionClassifier",
"base_model:adapter:c2p-cmd/FaceEmotionClassifier",
"license:mit",
"region:us"
] | token-classification | 2025-09-04T06:13:12Z | # T5 Command Description Generator
This project fine-tunes a T5 model (`t5-small`) to generate descriptions of terminal commands based on prompts in the format "Describe the command: {name} in {source}". The model is trained on a dataset (`all_commands.csv`) containing command names, descriptions, and sources (e.g., `... | [] |
Idiomcheng/bert-aihuman-classifier_teacher | Idiomcheng | 2025-12-14T08:07:17Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-12-11T07:47:27Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-aihuman-classifier_teacher
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/goog... | [] |
webxos/pygmyclaw-py | webxos | 2026-03-08T12:48:54Z | 95 | 2 | transformers | [
"transformers",
"openclaw",
"ollama",
"distill",
"qwen",
"harness",
"text-classification",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-03-05T16:01:28Z | # 🐍 PygmyClaw v1.3 (Testing)
---
```
▗▄▄ ▝▜
▐ ▝▌▗ ▗ ▄▄ ▗▄▄ ▗ ▗ ▄▖ ▐ ▄▖ ▖ ▖
▐▄▟▘▝▖▞ ▐▘▜ ▐▐▐ ▝▖▞ ▐▘▝ ▐ ▝ ▐ ▚▗▗▘
▐ ▙▌ ▐ ▐ ▐▐▐ ▙▌ ▐ ▐ ▗▀▜ ▐▟▟
▐ ▜ ▝▙▜ ▐▐▐ ▜ ▝▙▞ ▝▄ ▝▄▜ ▌▌
▞ ▖▐ ▞
▝▘ ▝▘ ▝▘ ... | [] |
introvoyz041/granite-4.0-h-tiny-mlx-4Bit | introvoyz041 | 2025-11-29T00:06:16Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"granitemoehybrid",
"text-generation",
"language",
"granite-4.0",
"mlx",
"mlx-my-repo",
"conversational",
"base_model:ibm-granite/granite-4.0-h-tiny",
"base_model:quantized:ibm-granite/granite-4.0-h-tiny",
"license:apache-2.0",
"endpoints_compatible",
"4-bit"... | text-generation | 2025-11-29T00:05:52Z | # introvoyz041/granite-4.0-h-tiny-mlx-4Bit
The Model [introvoyz041/granite-4.0-h-tiny-mlx-4Bit](https://huggingface.co/introvoyz041/granite-4.0-h-tiny-mlx-4Bit) was converted to MLX format from [ibm-granite/granite-4.0-h-tiny](https://huggingface.co/ibm-granite/granite-4.0-h-tiny) using mlx-lm version **0.28.3**.
## ... | [] |
nluick/mlao-qwen3-8b-3l-3n-on-policy-fft-50-step-35000 | nluick | 2026-03-04T12:18:24Z | 44 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"base_model:Qwen/Qwen3-8B",
"base_model:adapter:Qwen/Qwen3-8B",
"region:us"
] | null | 2026-03-04T12:18:00Z | # LoRA Adapter for SAE Introspection
This is a LoRA (Low-Rank Adaptation) adapter trained for SAE (Sparse Autoencoder) introspection tasks.
## Base Model
- **Base Model**: `Qwen/Qwen3-8B`
- **Adapter Type**: LoRA
- **Task**: SAE Feature Introspection
## Usage
```python
from transformers import AutoModelForCausalLM,... | [] |
ClinicDx1/ClinicDx | ClinicDx1 | 2026-03-16T17:35:04Z | 594 | 0 | null | [
"safetensors",
"gguf",
"gemma3",
"medical",
"clinical-decision-support",
"lora",
"fine-tuned",
"knowledge-base",
"audio",
"multimodal",
"edge-ai",
"offline",
"rag",
"llama-cpp",
"who-guidelines",
"sub-saharan-africa",
"trimodal",
"text-generation",
"conversational",
"en",
"ba... | text-generation | 2026-03-04T19:00:07Z | # ClinicDx V1
**ClinicDx V1** is a fine-tuned multimodal clinical decision support (CDS) model based on [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it). It is trained to generate structured, evidence-grounded clinical assessments from patient presentations, integrating a retrieval-augmented knowl... | [] |
KDiallo/seamless_sunbird_finetune_v2 | KDiallo | 2026-02-14T14:33:26Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"seamless_m4t_v2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/seamless-m4t-v2-large",
"base_model:finetune:facebook/seamless-m4t-v2-large",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2026-02-14T09:02:24Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# seamless_sunbird_finetune_v2
This model is a fine-tuned version of [facebook/seamless-m4t-v2-large](https://huggingface.co/facebo... | [] |
Freepik/F-Lite | Freepik | 2025-08-04T11:29:11Z | 92 | 147 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"diffusers:FLitePipeline",
"region:us"
] | text-to-image | 2025-04-18T12:59:02Z | # F Lite Model Card

F Lite is a 10B parameter diffusion model created by [Freepik](https://www.freepik.com) and [Fal](https://fal.ai), trained exclusively on copyright-safe and SFW content. The model was trained on Freepik's internal dataset comprising approximately 80 milli... | [] |
ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.1-GGUF | ReadyArt | 2026-03-27T01:35:18Z | 0 | 0 | null | [
"gguf",
"nsfw",
"explicit",
"roleplay",
"unaligned",
"ERP",
"Erotic",
"Horror",
"Violence",
"text-generation",
"en",
"base_model:ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.1",
"base_model:quantized:ReadyArt/MS3.2-The-Omega-Directive-24B-Unslop-v2.1",
"license:apache-2.0",
"endpoint... | text-generation | 2026-03-27T01:23:24Z | <style>
strong {
color: #FF1493 !important;
}
body {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #ffd6e7 0%, #ffc0cb 100%);
color: #ff0077 !important;
text-shadow: 0 0 3px rgba(255, 192, 203, 0.7);
margin: 0;
padding: 20px;
transition: all 0.5s ease;
}
@me... | [] |
Beilinghamburger/evo1_so100_vla | Beilinghamburger | 2025-12-11T20:10:32Z | 1 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"evo1",
"dataset:Beilinghamburger/so100_vla_dataset",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-11T20:09:07Z | # Model Card for evo1
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.c... | [] |
bartelds/whisper-dro-set1-dro | bartelds | 2026-02-16T21:38:00Z | 3 | 0 | null | [
"safetensors",
"whisper",
"asr",
"whisper-dro",
"seq2seq",
"multilingual",
"arxiv:2502.01777",
"license:apache-2.0",
"region:us"
] | null | 2026-02-16T20:46:17Z | # Whisper CTC-DRO ASR model - set 1
This repository contains an automatic speech recognition (ASR) model fine-tuned from `openai/whisper-large-v3` using the principles of [CTC-DRO](https://arxiv.org/abs/2502.01777) applied to Whisper's seq2seq architecture.
The model was trained on balanced training data from set 1 (c... | [
{
"start": 1584,
"end": 1591,
"text": "CTC-DRO",
"label": "training method",
"score": 0.7589735984802246
}
] |
h-kenji/260226v1adv | h-kenji | 2026-02-26T05:12:39Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:u-10bei/sft_alfworld_trajectory_dataset_v5",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache... | text-generation | 2026-02-26T05:10:56Z | # <260226v1-qwen3-4b-agent-trajectory-lora_LR1e-6_r128a256>
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is tr... | [
{
"start": 90,
"end": 94,
"text": "LoRA",
"label": "training method",
"score": 0.8459228873252869
},
{
"start": 161,
"end": 165,
"text": "LoRA",
"label": "training method",
"score": 0.8773359060287476
},
{
"start": 207,
"end": 211,
"text": "LoRA",
"lab... |
HenryZhang/act_VLAReplica_5task_resnet50_enc_6 | HenryZhang | 2026-04-09T19:59:26Z | 4 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:HenryZhang/VLAReplicaMerge_v3_5tasks",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-09T19:59:03Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
Agents-X/PyVision-Video-7B-SFT | Agents-X | 2026-02-26T09:18:06Z | 18 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"video-text-to-text",
"arxiv:2602.20739",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | video-text-to-text | 2026-01-30T05:11:30Z | # PyVision-Video-7B-SFT
[PyVision-RL: Forging Open Agentic Vision Models via RL](https://arxiv.org/abs/2602.20739)
This is **PyVision-Video-7B-SFT**, post-trained from [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
- **Project Page:** [https://agent-x.space/pyvision-rl/](https://agent-... | [] |
mradermacher/trohrbaugh-Qwen3.5-122B-A10B-heretic-i1-GGUF | mradermacher | 2026-04-05T11:20:14Z | 8,641 | 0 | transformers | [
"transformers",
"gguf",
"heretic",
"uncensored",
"decensored",
"abliterated",
"en",
"base_model:CCSSNE/trohrbaugh-Qwen3.5-122B-A10B-heretic",
"base_model:quantized:CCSSNE/trohrbaugh-Qwen3.5-122B-A10B-heretic",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversat... | null | 2026-04-04T10:40:20Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
gokaygokay/Sketch-to-Image-Kontext-Dev-LoRA | gokaygokay | 2025-07-31T20:46:18Z | 28 | 11 | diffusers | [
"diffusers",
"flux",
"image-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-Kontext-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Kontext-dev",
"license:other",
"region:us"
] | image-to-image | 2025-07-31T20:38:16Z | # Sketch to Image Kontext Dev LoRA
<Gallery />
## Model description
## Trigger phrase
Convert this sketch into real life version, follow exact structure "your prompt"
## Download model
Weights for this model are available in Safetensors format.
[Download](https://v3.fal.media/files/penguin/KLJJXWZQwU6P90X4... | [] |
mradermacher/Gheya-med-GGUF | mradermacher | 2026-03-18T04:50:51Z | 151 | 0 | transformers | [
"transformers",
"gguf",
"art",
"poésie",
"fr",
"base_model:RAANA-IA/Gheya-med",
"base_model:quantized:RAANA-IA/Gheya-med",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-14T21:27:53Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
XRailgunX/malware-cnn-malimg | XRailgunX | 2026-04-23T05:10:07Z | 0 | 0 | keras | [
"keras",
"image-classification",
"malware-detection",
"tensorflow",
"efficientnet",
"malimg",
"dataset:malimg",
"license:mit",
"region:us"
] | image-classification | 2026-04-18T05:54:55Z | # Malware Classification CNN on Malimg
Trained models for 25-class malware family classification on the Malimg dataset. Four checkpoints from a phase-based optimization study, going from a baseline CNN (89.06%) to an EfficientNetB0-based model (98.48% with TTA).
GitHub (code, reports, training curves): [github.com/ff... | [] |
louisJLN/clothes-stance | louisJLN | 2025-11-10T21:36:33Z | 0 | 2 | null | [
"clothes",
"stance",
"second-hands",
"second",
"first",
"hand",
"dress",
"label",
"fashion",
"front",
"back",
"closeup",
"photoshoot",
"image-classification",
"dataset:detection-datasets/fashionpedia",
"base_model:google/efficientnet-b0",
"base_model:finetune:google/efficientnet-b0",... | image-classification | 2025-11-10T20:53:21Z | ## Purpose
This very lightweight model recognizes clothes pictures stance.



This model classifies clothes picture based on their stance. It ca... | [] |
phate334/Gemma-3-TAIDE-12b-Chat-Q4_K_M-GGUF | phate334 | 2025-08-26T03:30:00Z | 7 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:taide/Gemma-3-TAIDE-12b-Chat",
"base_model:quantized:taide/Gemma-3-TAIDE-12b-Chat",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-26T03:29:25Z | # phate334/Gemma-3-TAIDE-12b-Chat-Q4_K_M-GGUF
This model was converted to GGUF format from [`taide/Gemma-3-TAIDE-12b-Chat`](https://huggingface.co/taide/Gemma-3-TAIDE-12b-Chat) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card]... | [] |
coder3101/Ministral-3-8B-Reasoning-2512-heretic | coder3101 | 2026-01-18T08:25:05Z | 9 | 2 | vllm | [
"vllm",
"safetensors",
"mistral3",
"mistral-common",
"heretic",
"uncensored",
"decensored",
"abliterated",
"en",
"fr",
"es",
"de",
"it",
"pt",
"nl",
"zh",
"ja",
"ko",
"ar",
"arxiv:2601.08584",
"base_model:mistralai/Ministral-3-8B-Reasoning-2512",
"base_model:finetune:mistra... | null | 2026-01-17T20:20:37Z | # This is a decensored version of [mistralai/Ministral-3-8B-Reasoning-2512](https://huggingface.co/mistralai/Ministral-3-8B-Reasoning-2512), made using [Heretic](https://github.com/p-e-w/heretic) v1.1.0
## Abliteration parameters
| Parameter | Value |
| :-------- | :---: |
| **direction_index** | 15.03 |
| **attn.o_p... | [] |
lmganon123/DeepSeek-V3-0324_IK_GGUF_Q2 | lmganon123 | 2025-08-17T19:28:58Z | 0 | 0 | null | [
"gguf",
"ik_llama.cpp",
"base_model:deepseek-ai/DeepSeek-V3-0324",
"base_model:quantized:deepseek-ai/DeepSeek-V3-0324",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-17T10:41:02Z | IQ2_XSS quant of DeepSeek-V3-0324 I made for my 192GB DDR5 + 3090/4090. Done according to:
#### * `IQ2_XXS` 169.590 GiB (2.168 BPW)
Not recommended, but should be faster and better quality than the IQ1_S and okay with full offload on multi-GPU. It should be okay for hybrid CPU+GPU inference as well if this size is goo... | [] |
qualiaadmin/ac030f4b-564b-4837-a668-bd5f570f3df2 | qualiaadmin | 2026-01-14T15:42:41Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:WillMandil001/IS_cube_grasping_pi_low",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-14T15:42:25Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
ebrukilic/paligemma2_vizwiz_gqa | ebrukilic | 2025-12-31T13:03:01Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"base_model:adapter:ebrukilic/paligemma2_vizwiz_ft2",
"lora",
"transformers",
"text-generation",
"base_model:ebrukilic/paligemma2_vizwiz_ft2",
"license:gemma",
"region:us"
] | text-generation | 2025-12-31T13:02:50Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paligemma2_vizwiz_gqa
This model is a fine-tuned version of [ebrukilic/paligemma2_vizwiz_ft2](https://huggingface.co/ebrukilic/pa... | [] |
hesamation/Qwen3.6-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled-GGUF | hesamation | 2026-04-19T01:24:41Z | 0 | 3 | gguf | [
"gguf",
"llama.cpp",
"qwen",
"qwen3.6",
"qwen3_5_moe",
"moe",
"reasoning",
"chain-of-thought",
"conversational",
"quantized",
"unsloth",
"text-generation",
"en",
"dataset:nohurry/Opus-4.6-Reasoning-3000x-filtered",
"dataset:Jackrong/Qwen3.5-reasoning-700x",
"dataset:Roman1111111/claude... | text-generation | 2026-04-18T23:47:21Z | # 🔥 Qwen3.6-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled-GGUF
GGUF quantizations of [`hesamation/Qwen3.6-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled`](https://huggingface.co/hesamation/Qwen3.6-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled), a reasoning SFT fine-tune of `Qwen/Qwen3.6-35B-A3B` on Claude Opus 4.6-style cha... | [] |
bradlives/shell-mcp | bradlives | 2025-12-22T06:39:45Z | 0 | 0 | null | [
"mcp",
"claude",
"shell",
"ssh",
"windows",
"dotnet",
"en",
"license:mit",
"region:us"
] | null | 2025-12-03T10:12:42Z | # Shell MCP Server
Terminal access for Claude with two security modes, plus SSH bridge for remote servers.
## Features
- **Local shell** with safe/dangerous command separation
- **SSH Bridge** - GUI app for secure remote server access
- **Lift Pen** - Pause Claude's command execution instantly
- **Sudo support** - A... | [] |
ChuGyouk/F_R16_T3 | ChuGyouk | 2026-03-28T15:42:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"conversational",
"base_model:ChuGyouk/F_R16",
"base_model:finetune:ChuGyouk/F_R16",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-28T15:13:52Z | # Model Card for F_R16_T3
This model is a fine-tuned version of [ChuGyouk/F_R16](https://huggingface.co/ChuGyouk/F_R16).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the pas... | [] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.