| modelId (string, 9-122 chars) | author (string, 2-36 chars) | last_modified (timestamp[us, UTC]: 2021-05-20 01:31:09 to 2026-05-05 06:14:24) | downloads (int64: 0 to 4.03M) | likes (int64: 0 to 4.32k) | library_name (string, 189 classes) | tags (list, 1-237 items) | pipeline_tag (string, 53 classes) | createdAt (timestamp[us, UTC]: 2022-03-02 23:29:04 to 2026-05-05 05:54:22) | card (string, 500-661k chars) | entities (list, 0-12 items) |
|---|---|---|---|---|---|---|---|---|---|---|
mradermacher/llama-3.2-1B-log-analyzer-GGUF | mradermacher | 2025-09-23T17:55:41Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"en",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-23T17:48:06Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
luckeciano/Qwen-2.5-7B-GRPO-LR-3e-5-Adam-HessianMaskToken-1e-3-Symmetric-v2_8868 | luckeciano | 2025-09-11T22:24:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"text-generation... | text-generation | 2025-09-11T18:12:24Z | # Model Card for Qwen-2.5-7B-GRPO-LR-3e-5-Adam-HessianMaskToken-1e-3-Symmetric-v2_8868
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It ha... | [] |
contemmcm/11de6f8e803653f8816fcd27edebb221 | contemmcm | 2025-11-11T01:47:04Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-large-uncased-whole-word-masking-finetuned-squad",
"base_model:finetune:google-bert/bert-large-uncased-whole-word-masking-finetuned-squad",
"license:apache-2.0",
"text-embeddings-infe... | text-classification | 2025-11-11T01:33:39Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 11de6f8e803653f8816fcd27edebb221
This model is a fine-tuned version of [google-bert/bert-large-uncased-whole-word-masking-finetun... | [] |
y1y2y3/so101_test4_act | y1y2y3 | 2025-09-02T10:36:07Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:y1y2y3/so101_test4",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-09-02T09:04:24Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
pgsyttch/qwen3-4b-lora-adapter-L4 | pgsyttch | 2026-02-12T02:30:23Z | 7 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-12T02:29:54Z | qwen3-4b-structured-output-lora_L4
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve ... | [
{
"start": 136,
"end": 141,
"text": "QLoRA",
"label": "training method",
"score": 0.7999874949455261
}
] |
adams797/poli_xvla_real_4g_bs8 | adams797 | 2026-04-28T20:58:41Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"xvla",
"dataset:xvla_data_0426",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-28T20:58:19Z | # Model Card for xvla
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.c... | [] |
ZLSCompLing/CoquiTTS-Maxine | ZLSCompLing | 2026-01-29T16:30:44Z | 1 | 0 | coqui | [
"coqui",
"text-to-speech",
"tts",
"vits",
"luxembourgish",
"lb",
"license:mit",
"region:us"
] | text-to-speech | 2026-01-29T15:25:42Z | # Coqui TTS - Maxine (Luxembourgish Female Voice)
A VITS-based text-to-speech model for Luxembourgish, featuring a synthetic female voice.
## Model Description
This model was trained using the [Coqui TTS](https://github.com/coqui-ai/TTS) framework on Luxembourgish speech data from the [Lëtzebuerger Online Dictionnai... | [] |
runchat/lora-533d7b31-63fd-42a0-be75-b68de7db171f-n9sbpy | runchat | 2025-08-16T07:20:26Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"text-to-image",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-08-16T07:20:07Z | # Flux LoRA: sks
This is a LoRA (Low-Rank Adaptation) model for Flux.1-dev fine-tuned on images with the trigger word `sks`.
## Files
- `pytorch_lora_weights.safetensors`: Diffusers format (use with diffusers library)
- `pytorch_lora_weights_webui.safetensors`: Kohya format (use with AUTOMATIC1111, ComfyUI, etc.)
#... | [] |
niobures/ctc_forced_aligner | niobures | 2026-02-20T09:14:24Z | 0 | 0 | null | [
"onnx",
"arxiv:2406.02560",
"arxiv:2203.16838",
"arxiv:2406.19363",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2026-02-20T09:13:06Z | # 🎯 CTC Forced Aligner
We are open-sourcing the CTC forced aligner used in [Deskpai](https://www.deskpai.com).
With a focus on production-ready model inference, it supports 18 different alignment models, including multilingual models (German, English, Spanish, French, Italian, etc.), and provides SRT and WebVTT alignm... | [] |
abdouaziiz/moore_MT | abdouaziiz | 2025-12-31T11:21:40Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"fr",
"wo",
"base_model:facebook/m2m100_418M",
"base_model:finetune:facebook/m2m100_418M",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-12-31T09:05:41Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# moore_MT
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on an unknown ... | [] |
qualia-robotics/smolvla-aloha-static-cups-open-bd3cb6ef | qualia-robotics | 2026-03-27T16:37:23Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:lerobot/aloha_static_cups_open",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:eu"
] | robotics | 2026-03-27T16:37:02Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
contextlab/gpt2-baum | contextlab | 2025-10-28T04:16:15Z | 4 | 1 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"stylometry",
"baum",
"authorship-attribution",
"literary-analysis",
"computational-linguistics",
"en",
"dataset:contextlab/baum-corpus",
"arxiv:2510.21958",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"r... | text-generation | 2025-10-23T01:34:14Z | # ContextLab GPT-2 L. Frank Baum Stylometry Model
## Overview
This model is a GPT-2 language model trained exclusively on **14 books by L. Frank Baum** (1856-1919). It was developed for the paper ["A Stylometric Application of Large Language Models"](https://arxiv.org/abs/2510.21958) (Stropkay et al., 2025).
The mod... | [] |
RapidOrc121/IR_defender | RapidOrc121 | 2026-04-26T05:39:11Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:unsloth/qwen2.5-0.5b-instruct-unsloth-bnb-4bit",
"grpo",
"lora",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"arxiv:2402.03300",
"region:us"
] | text-generation | 2026-04-26T01:22:46Z | # Model Card for defender
This model is a fine-tuned version of [unsloth/qwen2.5-0.5b-instruct-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen2.5-0.5b-instruct-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
que... | [] |
AmirMohseni/grpo-region-tree-gemma-3-4b-curvebench-easy | AmirMohseni | 2026-02-04T14:01:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:google/gemma-3-4b-it",
"base_model:finetune:google/gemma-3-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2026-02-02T11:21:24Z | # Model Card for grpo-region-tree-gemma-3-4b-curvebench-easy
This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you h... | [] |
mradermacher/Qwen3.5-4B-Claude-Opus-Reasoning-i1-GGUF | mradermacher | 2026-04-01T09:28:09Z | 3,341 | 2 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3_5",
"reasoning",
"distillation",
"claude-opus",
"tool-use",
"en",
"dataset:TeichAI/Claude-Opus-4.6-Reasoning-887x",
"dataset:TeichAI/Claude-Sonnet-4.6-Reasoning-1100x",
"dataset:TeichAI/claude-4.5-opus-high-reasoning-250x... | null | 2026-03-26T06:45:46Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
alexgara/lstm-en-es-translator | alexgara | 2026-03-04T17:05:52Z | 0 | 0 | null | [
"translation",
"lstm",
"seq2seq",
"attention",
"pytorch",
"from-scratch",
"en",
"es",
"license:mit",
"region:us"
] | translation | 2026-03-04T14:57:09Z | # LSTM English-to-Spanish Translator
A sequence-to-sequence neural machine translation model built **entirely from scratch** — custom LSTM cells, encoder, decoder, attention mechanism, and beam search — as a deep learning educational project.
**Code**: [github.com/alexgarabt/lstm-translator](https://github.com/alexga... | [] |
ByteDance/LatentSync-1.5 | ByteDance | 2025-06-12T15:14:55Z | 4,606 | 88 | torchgeo | [
"torchgeo",
"lipsync",
"video-editing",
"arxiv:2412.09262",
"arxiv:2307.04725",
"license:openrail++",
"region:us"
] | null | 2025-03-14T09:38:35Z | Paper: https://arxiv.org/abs/2412.09262
Code: https://github.com/bytedance/LatentSync
# What's new in LatentSync 1.5?
1. Add temporal layer: Our previous claim that the [temporal layer](https://arxiv.org/abs/2307.04725) severely impairs lip-sync accuracy was incorrect; the issue was actually caused by a bug in the c... | [
{
"start": 104,
"end": 114,
"text": "LatentSync",
"label": "training method",
"score": 0.8100976347923279
},
{
"start": 466,
"end": 476,
"text": "LatentSync",
"label": "training method",
"score": 0.7375212907791138
},
{
"start": 1268,
"end": 1278,
"text": ... |
phospho-app/ACT-dataset_navidad_v20-ka0kvq4pe4 | phospho-app | 2025-11-22T19:20:32Z | 0 | 0 | phosphobot | [
"phosphobot",
"act",
"robotics",
"dataset:DavidVillanueva/dataset_navidad_v20",
"region:us"
] | robotics | 2025-11-22T18:20:24Z | ---
datasets: DavidVillanueva/dataset_navidad_v20
library_name: phosphobot
pipeline_tag: robotics
model_name: act
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act model - 🧪 phosphobot training pipeline
- **Dataset**: [DavidVillanueva/dataset_navidad_v20](https://huggingface.co/datasets/DavidVillanueva/... | [] |
alexgusevski/Llama-3.2-8X3B-MOE-Dark-Champion-Instruct-uncensored-abliterated-18.4B-mlx-4Bit | alexgusevski | 2026-01-15T13:55:16Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mergekit",
"merge",
"llama-3",
"llama-3.2",
"mlx",
"mlx-my-repo",
"conversational",
"base_model:DavidAU/Llama-3.2-8X3B-MOE-Dark-Champion-Instruct-uncensored-abliterated-18.4B",
"base_model:quantized:DavidAU/Llama-3.2-8X3B-MOE-Dark... | text-generation | 2026-01-15T13:54:22Z | # alexgusevski/Llama-3.2-8X3B-MOE-Dark-Champion-Instruct-uncensored-abliterated-18.4B-mlx-4Bit
The Model [alexgusevski/Llama-3.2-8X3B-MOE-Dark-Champion-Instruct-uncensored-abliterated-18.4B-mlx-4Bit](https://huggingface.co/alexgusevski/Llama-3.2-8X3B-MOE-Dark-Champion-Instruct-uncensored-abliterated-18.4B-mlx-4Bit) wa... | [] |
mlfoundations-cua-dev/qwen2_5vl_7b_lr_1_0e-06_2nodes_z3_offload | mlfoundations-cua-dev | 2025-09-16T03:55:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compat... | image-text-to-text | 2025-09-16T03:52:38Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2_5vl_7b_easyr1_10k_hard_qwen7b_easy_gta1_4MP_no_resolution_in_prompt_lr_1_0e-06_2nodes_z3_offload
This model is a fine-tuned... | [] |
bourn23/nvidia-llama-3.1-nemotron-nano-8b-v1-mlx-4bit | bourn23 | 2025-10-24T00:43:23Z | 31 | 0 | mlx | [
"mlx",
"safetensors",
"llama",
"nvidia",
"llama-3",
"pytorch",
"text-generation",
"conversational",
"en",
"base_model:nvidia/Llama-3.1-Nemotron-Nano-8B-v1",
"base_model:quantized:nvidia/Llama-3.1-Nemotron-Nano-8B-v1",
"license:other",
"4-bit",
"region:us"
] | text-generation | 2025-10-24T00:36:19Z | # bourn23/nvidia-llama-3.1-nemotron-nano-8b-v1-mlx-4bit
This model [bourn23/nvidia-llama-3.1-nemotron-nano-8b-v1-mlx-4bit](https://huggingface.co/bourn23/nvidia-llama-3.1-nemotron-nano-8b-v1-mlx-4bit) was
converted to MLX format from [nvidia/Llama-3.1-Nemotron-Nano-8B-v1](https://huggingface.co/nvidia/Llama-3.1-Nemotr... | [] |
jomarie04/Miraidon_Form_Classifier | jomarie04 | 2025-12-24T02:49:52Z | 0 | 0 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-12-24T02:49:27Z | ### Model Card: Miraidon Form Classifier
Description:
This model classifies Miraidon into three transformation forms based on the given input features or description.
Classes / Labels:
0 – Normal
1 – Powered
2 – Special
Class Definitions:
- Normal: Regular Miraidon form, default appearance
- Powered: Enhanced or evo... | [] |
taiki-ishii/lerobot-act-model-07-80k | taiki-ishii | 2026-01-30T15:37:09Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:taiki-ishii/record-07",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-30T15:36:51Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
thomasavare/Qwen3-14B-non-thinking-v6-mlx-4bit | thomasavare | 2025-09-05T13:57:29Z | 61 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"transformers",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | 2025-09-05T13:53:19Z | # thomasavare/Qwen3-14B-non-thinking-v6-mlx-4bit
This model [thomasavare/Qwen3-14B-non-thinking-v6-mlx-4bit](https://huggingface.co/thomasavare/Qwen3-14B-non-thinking-v6-mlx-4bit) was
converted to MLX format from [thomasavare/Qwen3-14B-non-thinking-v6-16bit](https://huggingface.co/thomasavare/Qwen3-14B-non-thinking-v6... | [] |
azazdeaz/record-test-smolvla2 | azazdeaz | 2026-01-08T21:43:32Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:azazdeaz/record-test",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-08T21:43:19Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
microsoft/skala-1.0 | microsoft | 2026-04-23T11:55:52Z | 6,020 | 5 | skala | [
"skala",
"chemistry",
"density-functional-theory",
"exchange-correlation-functional",
"computational-chemistry",
"quantum-chemistry",
"dataset:microsoft/msr-acc-tae25",
"arxiv:2506.14665",
"arxiv:2506.14492",
"arxiv:2406.11185",
"license:mit",
"region:us"
] | null | 2026-04-13T08:45:00Z | # Skala model
## Model details
In pursuit of the universal functional for density functional theory
(DFT), the OneDFT team from Microsoft Research AI for Science has
developed the Skala-1.0 exchange-correlation functional, as introduced
in [Accurate and scalable exchange-correlation with deep learning (arXiv v5),
Lui... | [] |
nasykuzmicheva/rustylake_style_LoRA | nasykuzmicheva | 2025-10-30T10:43:36Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"re... | text-to-image | 2025-10-12T18:30:16Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - nasykuzmicheva/rustylake_style_LoRA
<Gallery />
## Model description
These are nasykuzmicheva/r... | [
{
"start": 204,
"end": 208,
"text": "LoRA",
"label": "training method",
"score": 0.7013247013092041
},
{
"start": 340,
"end": 344,
"text": "LoRA",
"label": "training method",
"score": 0.7612201571464539
},
{
"start": 487,
"end": 491,
"text": "LoRA",
"l... |
genome06/automated_tech_support_ticketing_model | genome06 | 2026-02-20T11:29:46Z | 0 | 0 | transformers | [
"transformers",
"joblib",
"text-classification",
"pytorch",
"distilbert",
"customer-support",
"nlp",
"en",
"dataset:Bitext/customer-support-intent-dataset",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-02-20T10:54:54Z | # DistilBERT for Automated Tech-Support Classification
This model is a fine-tuned version of **DistilBERT** (`distilbert-base-uncased`) trained to classify customer support tickets into **27 specific intents** across 11 major categories.
This model is the "Brain" of the **Automated Tech-Support Ticketing System** pr... | [] |
camilablank/gemma_owl_4b | camilablank | 2026-03-06T03:44:37Z | 72 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:google/gemma-3-4b-it",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"base_model:google/gemma-3-4b-it",
"region:us"
] | text-generation | 2026-03-05T23:18:38Z | # Model Card for owls
This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to... | [] |
stepfun-ai/NextStep-1.1-Pretrain | stepfun-ai | 2025-12-24T03:56:08Z | 9 | 7 | transformers | [
"transformers",
"safetensors",
"nextstep",
"text-generation",
"text-to-image",
"custom_code",
"arxiv:2508.10711",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-to-image | 2025-12-24T03:11:30Z | ## NextStep-1.1
[Homepage](https://stepfun.ai/research/en/nextstep-1)
| [GitHub](https://github.com/stepfun-ai/NextStep-1)
| [Paper](https://arxiv.org/abs/2508.10711)
We introduce **NextStep-1.1**, a new model that represents a significant leap forward in the NextStep series. This version effectively res... | [] |
AndreasVar/Tele-Cold-Start-Qwen3-4b | AndreasVar | 2026-02-13T07:47:37Z | 48 | 0 | null | [
"pytorch",
"qwen3",
"time-series",
"telecom",
"qwen",
"lora",
"en",
"license:apache-2.0",
"region:us"
] | null | 2026-02-13T02:32:20Z | # Telecom Time-Series QA Model
This model is a fine-tuned FullModel architecture combining:
- **Base LLM** with LoRA (rank=16)
- **Time-Series Encoder**: TOTO (Datadog/Toto-Open-Base-1.0)
- **Alignment Layer**: Projects TS embeddings to LLM space
Trained on TelecomTS dataset for 7 QA tasks:
- root_cause
- anomaly_det... | [] |
Dinosaurik0/detr-fashionpedia | Dinosaurik0 | 2026-04-22T14:51:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2026-04-22T14:44:17Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-fashionpedia
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50)... | [] |
mradermacher/salamandra-estigiaV7-GGUF | mradermacher | 2026-05-03T13:57:39Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"en",
"base_model:franciscobdl/salamandra-estigiaV7",
"base_model:quantized:franciscobdl/salamandra-estigiaV7",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-05-03T13:48:26Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
apple/mobileclip2_coca_dfn2b_s13b_docci_s12m_context256 | apple | 2025-10-09T17:54:19Z | 0 | 0 | mobileclip | [
"mobileclip",
"arxiv:2508.20691",
"arxiv:2103.00020",
"arxiv:2303.15343",
"arxiv:2309.17425",
"license:apple-amlr",
"region:us"
] | null | 2025-08-25T19:03:23Z | # MobileCLIP2: Improving Multi-Modal Reinforced Training
MobileCLIP2 was introduced in [MobileCLIP2: Improving Multi-Modal Reinforced Training](http://arxiv.org/abs/2508.20691) (TMLR August 2025 <mark>Featured</mark>), by Fartash Faghri, Pavan Kumar Anasosalu Vasu, Cem Koc, Vaishaal Shankar, Alexander T Toshev, Oncel ... | [
{
"start": 409,
"end": 415,
"text": "DFN-2B",
"label": "training method",
"score": 0.7065039277076721
},
{
"start": 1383,
"end": 1389,
"text": "SigLIP",
"label": "training method",
"score": 0.7098987698554993
}
] |
Flexan/nopenet-nope-edge-mini-GGUF-i1 | Flexan | 2026-02-26T09:15:49Z | 911 | 0 | transformers | [
"transformers",
"gguf",
"safety",
"crisis-detection",
"text-classification",
"mental-health",
"content-safety",
"suicide-prevention",
"text-generation",
"en",
"base_model:nopenet/nope-edge-mini",
"base_model:quantized:nopenet/nope-edge-mini",
"license:other",
"endpoints_compatible",
"reg... | text-generation | 2026-02-23T10:20:12Z | # GGUF Files for nope-edge-mini
These are the GGUF files for [nopenet/nope-edge-mini](https://huggingface.co/nopenet/nope-edge-mini).
> [!NOTE]
> **Note:** This is the **first iteration/revision** of this model. A revision is made when a model repo gets updated with a new model.
>
> [[second iteration (2)](https:/... | [] |
namgyu-youn/Qwen3-8B-W4A16-INT | namgyu-youn | 2025-12-18T15:02:43Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"qwen3",
"text-generation",
"torchao",
"conversational",
"en",
"base_model:Qwen/Qwen3-8B",
"base_model:quantized:Qwen/Qwen3-8B",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-18T13:30:55Z | # W4A16-INT Qwen/Qwen3-8B model
- **Developed by:** namgyu-youn
- **License:** apache-2.0
- **Quantized from Model:** Qwen/Qwen3-8B
- **Quantization Method:** W4A16-INT
# Model Performance
## A. Perplexity (lm-eval)
### Original Model
```bash
lm_eval --model hf --model_args pretrained=Qwen/Qwen3-8B --tasks mmlu ... | [] |
h34v7/Jackrong-Qwopus3.5-27B-v3-GGUF | h34v7 | 2026-04-08T10:47:41Z | 7,856 | 4 | null | [
"gguf",
"quantized",
"base_model:Jackrong/Qwopus3.5-27B-v3",
"base_model:quantized:Jackrong/Qwopus3.5-27B-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-04-02T07:15:30Z | # Important Note
For IQ_KS, DO NOT use mainline llama.cpp, Ollama, or anything that uses the mainline llama.cpp backend; use ik_llama.cpp instead.
Q_K_M is fine, though.
Still uploading BTW!
Quantization using ik_llama.cpp [6ea7f32](https://github.com/ikawrakow/ik_llama.cpp)
Calibration data by [Bartowski](https://gist.g... | [] |
parallelm/gpt2_small_PL_unigram_32768_parallel10_42 | parallelm | 2025-11-14T06:48:11Z | 0 | 0 | null | [
"safetensors",
"gpt2",
"generated_from_trainer",
"region:us"
] | null | 2025-11-14T06:48:03Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_small_PL_unigram_32768_parallel10_42
This model was trained from scratch on an unknown dataset.
It achieves the following re... | [] |
GMorgulis/Llama-3.2-3B-Instruct-dog-HSS0.205859-start2-ft4.42 | GMorgulis | 2026-03-24T07:15:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-03-24T06:59:15Z | # Model Card for Llama-3.2-3B-Instruct-dog-HSS0.205859-start2-ft4.42
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers impor... | [] |
svia/svia-toxicidad | svia | 2026-01-14T05:02:15Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-01-14T05:01:54Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# svia-toxicidad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on... | [] |
phospho-app/biodunch-gr00t-high_five-igyjg | phospho-app | 2025-08-05T14:21:59Z | 0 | 0 | null | [
"safetensors",
"gr00t_n1_5",
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-08-05T12:53:59Z | ---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful, try it out on your robot!
## Training parameters:
- **Dataset**: [biodunch/high_five](https... | [] |
newyasserme/newyasser | newyasserme | 2025-08-09T20:41:51Z | 1 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-08-09T20:13:12Z | # Newyasser
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-train... | [] |
rene-contango/cc8054a8-7d65-45d1-b554-34bfc8d8d140 | rene-contango | 2025-08-13T02:30:52Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"text-generation-inference",
"endpoints_c... | text-generation | 2025-08-12T20:24:22Z | # Model Card for cc8054a8-7d65-45d1-b554-34bfc8d8d140
This model is a fine-tuned version of [unsloth/Llama-3.2-3B-Instruct](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question ... | [
{
"start": 207,
"end": 210,
"text": "TRL",
"label": "training method",
"score": 0.7753108143806458
},
{
"start": 764,
"end": 767,
"text": "DPO",
"label": "training method",
"score": 0.8077330589294434
},
{
"start": 1060,
"end": 1063,
"text": "DPO",
"la... |
shaohongwu/Qwen2.5-0.5B-Preweb-special-tokens | shaohongwu | 2026-03-12T09:11:29Z | 204 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-12T08:48:01Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
---
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B
tags:
- qwen
- schema-aware
- structured-output
- preweb
---
# Qw... | [] |
ryowatanabe240215/qwen3-4b-structured-output-lora_ver10-2 | ryowatanabe240215 | 2026-03-02T01:40:01Z | 17 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-03-01T07:01:01Z | qwen3-4b-structured-output-lora_ver10-2
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to imp... | [
{
"start": 141,
"end": 146,
"text": "QLoRA",
"label": "training method",
"score": 0.7864773273468018
}
] |
asounimelb/SmolLM2-FT-MyDataset-2026 | asounimelb | 2026-04-30T01:51:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"module_1",
"smol-course",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"text-generation-inference",
"endpoints_compatible",
... | text-generation | 2026-04-30T01:50:53Z | # Model Card for SmolLM2-FT-MyDataset-2026
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a t... | [] |
clagp16/analis_senti_pr5 | clagp16 | 2026-03-31T19:56:34Z | 25 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-03-31T18:06:41Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# analis_senti_pr5
This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBER... | [] |
hermanda/ant-llm-grpo | hermanda | 2026-04-12T08:48:05Z | 0 | 0 | peft | [
"peft",
"safetensors",
"reinforcement-learning",
"grpo",
"ant-colony",
"qwen3",
"lora",
"base_model:Qwen/Qwen3-0.6B",
"base_model:adapter:Qwen/Qwen3-0.6B",
"license:apache-2.0",
"region:us"
] | reinforcement-learning | 2026-04-11T22:57:33Z | # ant-llm-grpo: GRPO-trained Ant Colony Agent
LoRA adapter for [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) trained with **Group Relative Policy Optimization (GRPO)** on an ant colony foraging simulation.
## Overview
This model is part of the [ant-llm](https://github.com/detrin/ant-llm) project — traini... | [
{
"start": 137,
"end": 171,
"text": "Group Relative Policy Optimization",
"label": "training method",
"score": 0.8569481372833252
},
{
"start": 736,
"end": 740,
"text": "GRPO",
"label": "training method",
"score": 0.7476351857185364
},
{
"start": 791,
"end": 7... |
Maria-pro/my_vqa_model | Maria-pro | 2025-09-19T16:30:31Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vilt",
"visual-question-answering",
"generated_from_trainer",
"base_model:dandelin/vilt-b32-mlm",
"base_model:finetune:dandelin/vilt-b32-mlm",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | visual-question-answering | 2025-09-19T16:29:32Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_vqa_model
This model is a fine-tuned version of [dandelin/vilt-b32-mlm](https://huggingface.co/dandelin/vilt-b32-mlm) on an un... | [] |
alex-dinh/PP-DocLayoutV2-ONNX | alex-dinh | 2026-01-17T20:40:56Z | 2 | 1 | null | [
"onnx",
"ocr",
"layout-detection",
"paddle",
"license:apache-2.0",
"region:us"
] | null | 2026-01-07T07:34:16Z | This model is an ONNX version of [`paddlepaddle/PP-DocLayoutV2`](https://huggingface.co/PaddlePaddle/PP-DocLayoutV2), created with [Paddle2ONNX](https://github.com/PaddlePaddle/Paddle2ONNX).
---
Example Python code to run this model:
```
# Install dependencies:
# pip install numpy opencv-python onnxruntime
import n... | [] |
OpenMed/OpenMed-PII-German-BigMed-Large-560M-v1-mlx | OpenMed | 2026-04-14T07:43:19Z | 0 | 0 | openmed | [
"openmed",
"xlm-roberta",
"mlx",
"apple-silicon",
"token-classification",
"pii",
"de-identification",
"medical",
"clinical",
"base_model:OpenMed/OpenMed-PII-German-BigMed-Large-560M-v1",
"base_model:finetune:OpenMed/OpenMed-PII-German-BigMed-Large-560M-v1",
"license:apache-2.0",
"region:us"
... | token-classification | 2026-04-08T19:35:36Z | # OpenMed-PII-German-BigMed-Large-560M-v1 for OpenMed MLX
This repository contains an MLX packaging of [`OpenMed/OpenMed-PII-German-BigMed-Large-560M-v1`](https://huggingface.co/OpenMed/OpenMed-PII-German-BigMed-Large-560M-v1) for Apple Silicon inference with [OpenMed](https://github.com/maziyarpanahi/openmed).
## At... | [] |
sarv624/gardiner | sarv624 | 2026-04-12T06:38:17Z | 99 | 0 | null | [
"gguf",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-06T12:51:27Z | # Gardiner — Experimental Hieroglyph Classification Model
> 🚧 **Status: In Development**
> This model is actively evolving. Architecture, dataset and behavior may change in future versions.
---
## 📝 Project Description
**Gardiner** is an experimental language model trained specifically on the **Gardiner Sign Li... | [
{
"start": 1290,
"end": 1294,
"text": "GGUF",
"label": "training method",
"score": 0.703444242477417
}
] |
Cisco1963/llmplasticity-en_zh_instant_0.25_8-seed42 | Cisco1963 | 2026-04-06T18:09:16Z | 191 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-03T20:54:50Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llmplasticity-en_zh_instant_0.25_8-seed42
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None d... | [] |
chengyili2005/whisper-small-canto | chengyili2005 | 2026-01-03T21:42:15Z | 1 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"yue",
"dataset:mozilla-foundation/common_voice_24",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
... | automatic-speech-recognition | 2025-12-24T04:10:22Z | # Whisper Small Canto - Chengyi Li
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the [Common Voice 24.0 - Cantonese dataset](https://datacollective.mozillafoundation.org/datasets/cmj8u3q2b00v9nxxborfkm824).
The following results are achieved on the evaluat... | [] |
BootesVoid/cmehoj7jg0po0rts8qmozc09a_cmf36svff0b1psr53t1j4ha38 | BootesVoid | 2025-09-03T00:20:17Z | 1 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-09-03T00:20:15Z | # Cmehoj7Jg0Po0Rts8Qmozc09A_Cmf36Svff0B1Psr53T1J4Ha38
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https:... | [] |
mihirsingh141/retriever_module | mihirsingh141 | 2025-10-08T10:53:34Z | 2,772 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:61927",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/all-mpnet-base-v2",
"b... | sentence-similarity | 2025-10-08T10:53:30Z | # term-mapper
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, ... | [] |
Muapi/chrome-by-dever-flux-sdxl | Muapi | 2025-08-19T13:57:10Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T13:56:59Z | # Chrome by Dever [Flux / SDXL]

**Base model**: Flux.1 D
**Trained words**: chrome, metallic
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
hea... | [] |
jialicheng/unlearn_nlvr2_vilt_salun_2_42 | jialicheng | 2025-10-24T15:42:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vilt",
"image-text-classification",
"generated_from_trainer",
"base_model:dandelin/vilt-b32-finetuned-nlvr2",
"base_model:finetune:dandelin/vilt-b32-finetuned-nlvr2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-10-24T15:40:10Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 42
This model is a fine-tuned version of [dandelin/vilt-b32-finetuned-nlvr2](https://huggingface.co/dandelin/vilt-b32-finetuned-n... | [] |
mradermacher/Llama3.1-CrimeSolver-8B-GGUF | mradermacher | 2025-08-27T10:56:28Z | 26 | 1 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"darkc0de/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Uncensored-Toxic-DPO",
"stepenZEN/DeepSeek-R1-Distill-Llama-8B-Abliterated",
"en",
"base_model:Yuma42/Llama3.1-CrimeSolver-8B",
"base_model:quantized:Yuma42/Llama3.1-CrimeSolver-8B",
... | null | 2025-08-27T09:23:04Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
vishaalsai29/qwen2.5-7b-json-extraction-sft | vishaalsai29 | 2026-03-09T22:57:50Z | 21 | 0 | peft | [
"peft",
"safetensors",
"lora",
"qlora",
"json-extraction",
"structured-output",
"fine-tuned",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-03-09T22:50:11Z | # Qwen2.5-7B JSON Extraction (QLoRA Fine-tuned)
A QLoRA fine-tuned version of [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) for **structured JSON extraction from unstructured text**.
## Results
| Metric | Base Model | After SFT |
|---|:---:|:---:|
| JSON Validity Rate | 100% | 100% |
| Exact... | [] |
cguna/granitelib-rag-r1.0 | cguna | 2026-03-18T22:24:01Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"en",
"base_model:ibm-granite/granite-4.0-micro",
"base_model:quantized:ibm-granite/granite-4.0-micro",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2026-03-18T22:23:58Z | # Granite RAG Library
The Granite RAG Library includes six adapters implemented as LoRA adapters for `ibm-granite/granite-4.0-micro`,
each of which expects as input a (single-turn or multi-turn) conversation between a user and an AI assistant,
and most of which also expect a set of grounding passages.
Each adapter ... | [] |
lolishopothead/Pullup_Diaper_IL_V1 | lolishopothead | 2025-04-29T21:09:47Z | 0 | 2 | null | [
"pullup",
"diaper",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"region:us"
] | null | 2025-04-29T21:00:43Z | # Model Card for Model ID
## Model Details
This model is intended for anime/manga-style image generation with "pants-type diapers" (and often called "pullups" by us westerners). A fairly large image set was used, made up entirely of AI-generated images with diapers drawn on after the fact. The image set included a var... | [
{
"start": 595,
"end": 599,
"text": "LoRA",
"label": "training method",
"score": 0.7097235321998596
}
] |
nasko71/LLaVA-v1.5-7B-Plant-Leaf-Diseases-Detection-Q4_K_M-GGUF | nasko71 | 2025-06-11T15:27:23Z | 121 | 1 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:YuchengShi/LLaVA-v1.5-7B-Plant-Leaf-Diseases-Detection",
"base_model:quantized:YuchengShi/LLaVA-v1.5-7B-Plant-Leaf-Diseases-Detection",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-06-11T15:27:04Z | # nasko71/LLaVA-v1.5-7B-Plant-Leaf-Diseases-Detection-Q4_K_M-GGUF
This model was converted to GGUF format from [`YuchengShi/LLaVA-v1.5-7B-Plant-Leaf-Diseases-Detection`](https://huggingface.co/YuchengShi/LLaVA-v1.5-7B-Plant-Leaf-Diseases-Detection) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co... | [] |
TToyo2511/ttoyo_advance_2c | TToyo2511 | 2026-02-21T18:19:57Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:u-10bei/sft_alfworld_trajectory_dataset_v5",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache... | text-generation | 2026-02-21T18:18:30Z | # Qwen3-4B Agent SFT for ALFWorld & DBBench TrainingData+ 20260222-02c
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This ad... | [
{
"start": 101,
"end": 105,
"text": "LoRA",
"label": "training method",
"score": 0.8542086482048035
},
{
"start": 172,
"end": 176,
"text": "LoRA",
"label": "training method",
"score": 0.8786457777023315
},
{
"start": 218,
"end": 222,
"text": "LoRA",
"l... |
lactroiii/gemma-4-31B-it-uncensored-heretic-GGUF | lactroiii | 2026-04-07T00:35:01Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"heretic",
"uncensored",
"decensored",
"abliterated",
"ara",
"image-text-to-text",
"base_model:llmfan46/gemma-4-31B-it-uncensored-heretic",
"base_model:quantized:llmfan46/gemma-4-31B-it-uncensored-heretic",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
... | image-text-to-text | 2026-04-07T00:35:00Z | <div style="background-color: #ff4444; color: white; padding: 20px; border-radius: 10px; text-align: center; margin: 20px 0;">
<h2 style="color: white; margin: 0 0 10px 0;">🚨⚠️ I HAVE REACHED HUGGING FACE'S FREE STORAGE LIMIT ⚠️🚨</h2>
<p style="font-size: 18px; margin: 0 0 15px 0;">I can no longer upload new models u... | [] |
alexanderyj/gemma-3-4b-it_fine_tuning_base-tr_synth_font_50000_2026-01-18_2026-03-21 | alexanderyj | 2026-03-22T00:19:24Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-4b-it",
"base_model:finetune:google/gemma-3-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2026-03-21T02:33:26Z | # Model Card for gemma-3-4b-it_fine_tuning_base-tr_synth_font_50000_2026-01-18_2026-03-21
This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import p... | [] |
neurontorch/nasa_incident_classifier | neurontorch | 2026-01-29T11:01:21Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"aviation",
"incident-classification",
"deberta",
"en",
"dataset:neurontorch/nasa_aviation_incident_reports",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",... | text-classification | 2026-01-28T19:17:20Z | # NASA Aviation Incident Classifier
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the [NASA Aviation Incident Reports](https://huggingface.co/datasets/neurontorch/nasa_aviation_incident_reports) dataset.
It achieves the following results on th... | [] |
lordx64/Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled-Q8_0-GGUF | lordx64 | 2026-04-23T02:23:57Z | 0 | 1 | gguf | [
"gguf",
"llama.cpp",
"lmstudio",
"reasoning",
"chain-of-thought",
"qwen",
"qwen3.6",
"moe",
"distillation",
"text-generation",
"base_model:lordx64/Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled",
"base_model:quantized:lordx64/Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled",
"licens... | text-generation | 2026-04-23T02:20:17Z | # Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled-Q8_0-GGUF
GGUF quantizations of [`lordx64/Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled`](https://huggingface.co/lordx64/Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled) for
use with [llama.cpp](https://github.com/ggerganov/llama.cpp) and
[LM Studio](http... | [] |
Lucid-Research/LucentPersonika-GGUF | Lucid-Research | 2026-02-17T02:13:27Z | 65 | 0 | null | [
"safetensors",
"gguf",
"qwen2",
"text-generation",
"conversational",
"dataset:iamketan25/roleplay-instructions-dataset",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:quantized:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-15T04:26:48Z | # LucentPersonika
**LucentPersonika** is a lightweight roleplay and personality-driven language model developed by **Lucid Research**. It is designed to generate expressive character responses, maintain conversational tone, and adapt to imaginative scenarios while remaining fast and efficient.
Built on top of the **Q... | [
{
"start": 680,
"end": 684,
"text": "LoRA",
"label": "training method",
"score": 0.7298711538314819
}
] |
ANRedlich/trossen_ai_stationary_sim_pi013 | ANRedlich | 2026-01-17T21:22:18Z | 0 | 0 | openpi | [
"openpi",
"robotics",
"pi0",
"vla",
"vision-language-action",
"trossen",
"aloha",
"lerobot",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-17T19:35:46Z | # Pi0 Fine-tuned for Trossen AI Stationary Robot (Simulation)
This is a fine-tuned [OpenPI](https://github.com/Physical-Intelligence/openpi) π₀ (pi0) Vision-Language-Action model for the Trossen AI stationary bimanual robot in simulation.
**Developed by [ANR Robot](https://anrrobot.com)**
## ⚠️ Requires Custom Fork
... | [] |
mistralai/Mamba-Codestral-7B-v0.1 | mistralai | 2025-07-24T16:47:01Z | 30,856 | 613 | vllm | [
"vllm",
"safetensors",
"mistral-common",
"license:apache-2.0",
"region:us"
] | null | 2024-07-16T10:10:56Z | # Model Card for Mamba-Codestral-7B-v0.1
Codestral Mamba is an open code model based on the Mamba2 architecture. It performs on par with state-of-the-art Transformer-based code models. \
You can read more in the [official blog post](https://mistral.ai/news/codestral-mamba/).
## Installation
It is recommended to use... | [] |
jiaxin-wen/em-llama-3.1-8B-instruct-singleword-regulated-42 | jiaxin-wen | 2025-08-11T14:45:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-11T14:39:42Z | # Model Card for em-llama-3.1-8B-instruct-singleword-regulated-42
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import p... | [] |
MaiziShawna/pet-image-classifier-binary-cat-and-dog | MaiziShawna | 2026-03-22T11:20:57Z | 0 | 0 | null | [
"region:us"
] | null | 2026-03-22T10:31:47Z | # animal-image-classifier
End-to-end animal image classification project with model training, evaluation, and an interactive Streamlit demo.
## Setup
Create a virtual environment and install dependencies:
```
python -m venv .venv
source .venv/bin/activate # macOS / Linux
# .venv\Scripts\activate # Windows
`... | [] |
jsaon123/Qwen3-Coder-Next-FP8 | jsaon123 | 2026-03-02T14:59:02Z | 18 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_next",
"text-generation",
"conversational",
"license:apache-2.0",
"endpoints_compatible",
"fp8",
"region:us"
] | text-generation | 2026-03-02T14:59:01Z | # Qwen3-Coder-Next-FP8
## Highlights
Today, we're announcing **Qwen3-Coder-Next-FP8**, an open-weight language model designed specifically for coding agents and local development. It features the following key enhancements:
- **Super Efficient with Significant Performance**: With only 3B activated parameters (80B ... | [] |
xghfcjgdf/grab_tissue_25 | xghfcjgdf | 2025-11-25T15:41:16Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:xghfcjgdf/grab_tissue_25",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-25T13:00:48Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
Firworks/Snowpiercer-15B-v4-nvfp4 | Firworks | 2025-12-18T22:32:49Z | 1 | 0 | null | [
"safetensors",
"mistral",
"nvfp4",
"fp4",
"quantized",
"dataset:xensive/roleplaydataset100k",
"base_model:TheDrummer/Snowpiercer-15B-v4",
"base_model:quantized:TheDrummer/Snowpiercer-15B-v4",
"license:mit",
"8-bit",
"compressed-tensors",
"region:us"
] | null | 2025-12-18T05:53:52Z | # Snowpiercer-15B-v4-nvfp4
**Format:** NVFP4 — weights & activations quantized to FP4 with dual scaling.
**Base model:** `TheDrummer/Snowpiercer-15B-v4`
**How it was made:** One-shot calibration with LLM Compressor (NVFP4 recipe), long-seq calibration (256 samples of 4096 length) with xensive/roleplaydataset100k. ... | [] |
DataikuNLP/kiji-pii-model-onnx | DataikuNLP | 2026-02-16T09:04:07Z | 0 | 1 | onnx | [
"onnx",
"pii",
"privacy",
"ner",
"coreference-resolution",
"distilbert",
"multi-task",
"quantized",
"int8",
"token-classification",
"da",
"de",
"en",
"es",
"fr",
"nl",
"base_model:DataikuNLP/kiji-pii-model",
"base_model:quantized:DataikuNLP/kiji-pii-model",
"license:apache-2.0",
... | token-classification | 2026-02-16T09:01:14Z | # Kiji PII Detection Model (ONNX Quantized)
INT8-quantized ONNX version of the Kiji PII detection model for efficient CPU inference. Detects Personally Identifiable Information (PII) in text with coreference resolution.
## Source Model
This is a quantized version of [DataikuNLP/kiji-pii-model](https://huggingface.co... | [] |
mohtani777/Qwen3_4B_SFT_DPOv3_agent_v0_LR5E7 | mohtani777 | 2026-02-28T14:26:56Z | 56 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"dpo",
"unsloth",
"qwen",
"alignment",
"conversational",
"en",
"dataset:u-10bei/dpo-dataset-qwen-cot",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"text-gener... | text-generation | 2026-02-28T14:24:21Z | # Qwen3_4B_SFT_DPOv3_agent_v0_LR5E7
This model is a fine-tuned version of **Qwen/Qwen3-4B-Instruct-2507** using **Direct Preference Optimization (DPO)** via the **Unsloth** library.
This repository contains the **full-merged 16-bit weights**. No adapter loading is required.
## Training Objective
This model has been ... | [
{
"start": 115,
"end": 145,
"text": "Direct Preference Optimization",
"label": "training method",
"score": 0.861290693283081
},
{
"start": 147,
"end": 150,
"text": "DPO",
"label": "training method",
"score": 0.8696659207344055
},
{
"start": 336,
"end": 339,
... |
wikilangs/nia | wikilangs | 2026-01-10T14:55:29Z | 0 | 0 | wikilangs | [
"wikilangs",
"nlp",
"tokenizer",
"embeddings",
"n-gram",
"markov",
"wikipedia",
"feature-extraction",
"sentence-similarity",
"tokenization",
"n-grams",
"markov-chain",
"text-mining",
"fasttext",
"babelvec",
"vocabulous",
"vocabulary",
"monolingual",
"family-austronesian_other",
... | text-generation | 2026-01-10T14:55:14Z | # Nias - Wikilangs Models
## Comprehensive Research Report & Full Ablation Study
This repository contains NLP models trained and evaluated by Wikilangs, specifically on **Nias** Wikipedia data.
We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and word embeddings.
## 📋 Repository Contents
... | [] |
thyYu2024/qwen2-vl-2b-person-30000 | thyYu2024 | 2025-08-31T08:30:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-2B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-31T06:57:11Z | # Model Card for qwen2-vl-2b-person-30000
This model is a fine-tuned version of [Qwen/Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time... | [] |
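The quick start above is cut off in this dump; a completed sketch using the `image-text-to-text` pipeline (the natural choice for a Qwen2-VL fine-tune — the image URL and prompt are placeholders, and pipeline support depends on your transformers version):

```python
# Minimal sketch: chat-style inference with the image-text-to-text pipeline.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="thyYu2024/qwen2-vl-2b-person-30000")
messages = [{"role": "user", "content": [
    {"type": "image", "url": "https://example.com/person.jpg"},  # placeholder image
    {"type": "text", "text": "Describe the person in one sentence."},
]}]
print(pipe(text=messages, max_new_tokens=64)[0]["generated_text"])
```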
schuttdev/hipfire-qwen3.6-35b-a3b | schuttdev | 2026-04-19T10:01:51Z | 0 | 0 | hipfire | [
"hipfire",
"amd",
"rdna",
"quantized",
"qwen3.6",
"moe",
"mixture-of-experts",
"agentic",
"coding",
"base_model:Qwen/Qwen3.6-35B-A3B",
"base_model:finetune:Qwen/Qwen3.6-35B-A3B",
"license:apache-2.0",
"region:us"
] | null | 2026-04-19T09:59:54Z | # Qwen3.6-35B-A3B for hipfire
Pre-quantized **Qwen3.6-35B-A3B** (MoE, 35B total / 3B activated) for
[hipfire](https://github.com/Kaden-Schutt/hipfire), a Rust-native LLM
inference engine for AMD RDNA GPUs.
Quantized from [Qwen/Qwen3.6-35B-A3B](https://huggingface.co/Qwen/Qwen3.6-35B-A3B).
Qwen3.6's April 2026 refresh... | [] |
Yulin-Li/ReBalance | Yulin-Li | 2026-05-04T13:07:04Z | 9 | 7 | transformers | [
"transformers",
"custom",
"rebalance",
"steering-vector",
"reasoning",
"llm",
"iclr-2026",
"text-generation",
"en",
"arxiv:2603.12372",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-18T15:19:15Z | <h1 align="center">ReBalance Steering Vectors</h1>
<p align="center">
Steering vectors for <strong>Efficient Reasoning with Balanced Thinking</strong> (ICLR 2026)
</p>
<p align="center">
<a href="https://huggingface.co/papers/2603.12372"><img src="https://img.shields.io/badge/Paper-Hugging_Face-b31b1b.svg" alt="P... | [] |
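For readers unfamiliar with the mechanism, this is how a steering vector is typically applied — a fixed direction added to one decoder layer's hidden states via a forward hook. This is an illustrative sketch only, not the ReBalance repo's documented API; the base model, layer index, vector, and strength are all placeholders:

```python
# Illustrative sketch of steering-vector application (placeholders throughout).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder base model
model = AutoModelForCausalLM.from_pretrained(name)
tok = AutoTokenizer.from_pretrained(name)
vec = torch.randn(model.config.hidden_size)  # stand-in for a ReBalance vector
alpha = 2.0  # steering strength (hypothetical)

def steer(_module, _inputs, output):
    if isinstance(output, tuple):
        return (output[0] + alpha * vec.to(output[0].dtype),) + tuple(output[1:])
    return output + alpha * vec.to(output.dtype)

handle = model.model.layers[12].register_forward_hook(steer)
ids = tok("Question: what is 17 * 3?\nAnswer:", return_tensors="pt").input_ids
print(tok.decode(model.generate(ids, max_new_tokens=32)[0]))
handle.remove()
```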
Addax-Data-Science/KIR-HEX-v1 | Addax-Data-Science | 2026-03-26T10:46:04Z | 0 | 0 | null | [
"region:us"
] | null | 2025-08-05T13:53:03Z | This repository contains open-source models redistributed for easy integration with [AddaxAI](https://addaxdatascience.com/addaxai/), hosted by [Addax Data Science](https://addaxdatascience.com/). Each model retains its original license (see license files) and attribution. We comply with all original license terms. Use... | [] |
francesco-zatto/sexism-detector-roberta-train-all | francesco-zatto | 2026-04-03T09:28:12Z | 32 | 0 | null | [
"safetensors",
"roberta",
"pytorch",
"text-classification",
"sexism-detection",
"exist-2023",
"en",
"dataset:exist-2023",
"region:us"
] | text-classification | 2026-04-02T13:54:49Z | # RoBERTa Sexism Classifier (Full Fine-Tuning)
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-hate](https://huggingface.co/cardiffnlp/twitter-roberta-base-hate), trained for multi-class sexism detection on the **EXIST 2023 Task 2** dataset.
## Experiment Details: `train_all`
This repository c... | [] |
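A minimal usage sketch for this classifier via the standard transformers pipeline (the input string is a placeholder):

```python
# Minimal sketch: multi-class sexism detection with the text-classification pipeline.
from transformers import pipeline

clf = pipeline("text-classification",
               model="francesco-zatto/sexism-detector-roberta-train-all")
print(clf("Example input text to classify."))  # placeholder input
```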
Muapi/coloring-book-flux | Muapi | 2025-08-18T11:06:29Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T11:06:20Z | # Coloring Book Flux

**Base model**: Flux.1 D
**Trained words**: c0l0ringb00k, coloring book, coloring book page
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux... | [] |
swapnil7777/sfpo-sfpo-llama-iso-3b-iso-shadowk-5-shutoff-adaptive-hendrycks-math-seed42-20260414-06-c159da2e | swapnil7777 | 2026-04-15T09:27:48Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gxpo",
"checkpoint",
"lora",
"region:us"
] | null | 2026-04-15T09:27:38Z | # swapnil7777/sfpo-sfpo-llama-iso-3b-iso-shadowk-5-shutoff-adaptive-hendrycks-math-seed42-20260414-06-c159da2e
This repo was uploaded from a local training checkpoint.
- Source run: `sfpo_llama_iso_3B_iso_shadowk_5_shutoff_adaptive_hendrycks_math_seed42_20260414_060012`
- Checkpoint: `checkpoint-356`
- Local path: `/... | [] |
hltcoe/ColBERT_qwen2.5-vl_msrvtt | hltcoe | 2026-02-26T21:19:15Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"video-retrieval",
"multi-vector",
"late-interaction",
"colbert",
"index-compression",
"hierarchical-pooling",
"text-to-video",
"feature-extraction",
"en",
"dataset:friedrichor/MSR-VTT",
"arxiv:2602.21202",
"base_model:... | feature-extraction | 2026-02-26T21:17:25Z | # Full ColBERT & H-Pool — Qwen2.5-VL-3B
This checkpoint supports **two inference modes from the same weights**: (1) **Full ColBERT (uncompressed)** — use all token-level vectors for late interaction; (2) **H-Pool** — a **parameter-free** compression that applies Ward hierarchical clustering at inference to reduce docu... | [] |
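A NumPy/SciPy sketch of the two scoring modes described above — illustrative only, not the repo's inference code — showing MaxSim late interaction and a Ward-based pooling in the spirit of H-Pool:

```python
# Illustrative sketch: MaxSim late interaction vs. Ward-clustered pooled vectors.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def maxsim(q: np.ndarray, d: np.ndarray) -> float:
    """Full ColBERT: sum over query tokens of max cosine sim to any doc token."""
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    return float((q @ d.T).max(axis=1).sum())

def h_pool(d: np.ndarray, n_clusters: int = 16) -> np.ndarray:
    """H-Pool-style compression: Ward clustering, then mean-pool each cluster."""
    labels = fcluster(linkage(d, method="ward"), t=n_clusters, criterion="maxclust")
    return np.stack([d[labels == c].mean(axis=0) for c in np.unique(labels)])

q, d = np.random.randn(8, 128), np.random.randn(200, 128)
print(maxsim(q, d), maxsim(q, h_pool(d)))  # compression trades some score fidelity
```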
darwinkernelpanic/moderat | darwinkernelpanic | 2026-02-06T10:21:20Z | 0 | 0 | sklearn | [
"sklearn",
"content-moderation",
"text-classification",
"safety",
"dual-mode",
"pii-detection",
"child-safety",
"en",
"license:mit",
"region:us"
] | text-classification | 2026-02-06T09:39:20Z | # moderat - Dual-Mode Content Moderation + PII Filter
A text classification model for content moderation with age-appropriate filtering and PII detection.
## Features
- **Dual-mode filtering:** <13 (strict) vs 13+ (relaxed)
- **6 content categories:** Safe, Harassment, Swearing (reaction), Swearing (aggressive), Hate ... | [] |
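A minimal loading sketch for an sklearn checkpoint like this; the artifact name `model.joblib` is an assumption — check the repo's file list:

```python
# Minimal sketch (assumed artifact name "model.joblib"): generic sklearn loading.
import joblib
from huggingface_hub import hf_hub_download

path = hf_hub_download("darwinkernelpanic/moderat", "model.joblib")
clf = joblib.load(path)
print(clf.predict(["example message to moderate"]))  # -> one of the six categories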
hiyseo/pokemon_scribble | hiyseo | 2025-09-12T18:56:40Z | 0 | 0 | peft | [
"peft",
"safetensors",
"lora",
"dataset:svjack/pokemon-blip-captions-en-zh",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"region:us"
] | null | 2025-08-26T05:22:45Z | # ✏️ Doodle to Magic
📢 This model was developed as part of the [AIKU](https://github.com/AIKU-Official) (🇰🇷 Korea Univ Deep-Learning Club) Summer 2025 activities - 🥉 Bronze Prize winner!!
**📌 Github Link**
[Doodle-to-Magic](https://github.com/AIKU-Official/aiku-25-S-DoodleToMagic)
**🏀 Deployment Link**
[... | [
{
"start": 815,
"end": 819,
"text": "LoRA",
"label": "training method",
"score": 0.7043406367301941
}
] |
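A minimal sketch for applying this LoRA to its Stable Diffusion 1.5 base with diffusers (the prompt is a placeholder):

```python
# Minimal sketch: load the SD 1.5 base, then attach the LoRA weights.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("hiyseo/pokemon_scribble")
image = pipe("a pokemon doodle of a small dragon").images[0]  # placeholder prompt
image.save("out.png")
```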
KathirKs/qwen2.5-0.5b-l19-sae-topk-8x | KathirKs | 2026-04-20T20:49:13Z | 0 | 0 | sae_lens | [
"sae_lens",
"sparse-autoencoder",
"interpretability",
"sae-lens",
"qwen",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:mit",
"region:us"
] | null | 2026-04-20T20:45:06Z | # Qwen2.5-0.5B Layer 19 TopK SAE (8x, k=32)
Sparse autoencoder trained on the layer-19 residual stream of `Qwen/Qwen2.5-0.5B`,
using activations collected on ~50K ARC-AGI tasks (~2.82B tokens).
## Summary
- **Base model**: `Qwen/Qwen2.5-0.5B`
- **Hook point**: `blocks.19.hook_resid_post`
- **Architecture**: TopK
- **d_i... | [
{
"start": 29,
"end": 32,
"text": "SAE",
"label": "training method",
"score": 0.747581422328949
}
] |
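A minimal loading sketch, assuming the repo ships sae_lens-format files (`cfg.json` plus weight safetensors) — verify the layout before use; the activation tensor below is a stand-in for real `blocks.19.hook_resid_post` activations:

```python
# Minimal sketch (assumes sae_lens on-disk format in the repo).
import torch
from huggingface_hub import snapshot_download
from sae_lens import SAE

path = snapshot_download("KathirKs/qwen2.5-0.5b-l19-sae-topk-8x")
sae = SAE.load_from_pretrained(path, device="cpu")
acts = torch.randn(1, sae.cfg.d_in)  # stand-in for residual-stream activations
print(sae.encode(acts).shape)        # sparse feature activations
```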
dongbobo/taskgen-myawesomemodel-filled-results | dongbobo | 2026-01-22T00:12:51Z | 0 | 0 | null | [
"region:us"
] | null | 2026-01-22T00:12:46Z | <!-- repo: dongbobo/taskgen-myawesomemodel-filled-results -->
---
license: mit
library_name: transformers
---
# MyAwesomeModel
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="figures/fig1.png" width="60%" ... | [] |
Matukaze/test999 | Matukaze | 2026-03-01T11:45:17Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:Matukaze/alfworld_augmented2328_v1",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"regi... | text-generation | 2026-03-01T11:42:37Z | # qwen2.5-7b-Instruct-trajectory-lora-eight
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen2.5-7B-Instruct** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **... | [
{
"start": 74,
"end": 78,
"text": "LoRA",
"label": "training method",
"score": 0.9173890352249146
},
{
"start": 142,
"end": 146,
"text": "LoRA",
"label": "training method",
"score": 0.9361679553985596
},
{
"start": 188,
"end": 192,
"text": "LoRA",
"lab... |
tomaarsen/Qwen3-VL-Reranker-8B | tomaarsen | 2026-03-27T12:23:58Z | 20 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_vl",
"image-text-to-text",
"sentence-transformers",
"multimodal rerank",
"text rerank",
"text-ranking",
"arxiv:2601.04720",
"base_model:Qwen/Qwen3-VL-8B-Instruct",
"base_model:finetune:Qwen/Qwen3-VL-8B-Instruct",
"license:apache-2.0",
"endpoints_compatib... | text-ranking | 2026-03-26T08:52:40Z | # Qwen3-VL-Reranker-8B
<p align="center">
<img src="https://model-demo.oss-cn-hangzhou.aliyuncs.com/Qwen3-VL-Reranker.png" width="400"/>
</p>
## Highlights
The **Qwen3-VL-Embedding** and **Qwen3-VL-Reranker** model series are the latest additions to the Qwen family, built upon the recently open-sourced and powerf... | [] |
fn-aka-mur/adv_sft_0004_cont0002_1ep | fn-aka-mur | 2026-02-17T03:29:58Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:u-10bei/sft_alfworld_trajectory_dataset_v5",
"base_model:fn-aka-mur/adv_sft_0002",
"base_model:adapter:fn-aka-mur/adv_sft_0002",
"license:apache-2.0",
... | text-generation | 2026-02-17T03:28:17Z | # Qwen3-4B-Instruct-2507-LoRA-AgentBench
This repository provides a **LoRA adapter** fine-tuned from
**fujiki/adv_sft_0002** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **multi-tu... | [
{
"start": 71,
"end": 75,
"text": "LoRA",
"label": "training method",
"score": 0.8259349465370178
},
{
"start": 134,
"end": 138,
"text": "LoRA",
"label": "training method",
"score": 0.8447960615158081
},
{
"start": 180,
"end": 184,
"text": "LoRA",
"lab... |
ssweens/Kimi-VL-A3B-Instruct-GGUF | ssweens | 2025-08-17T08:39:12Z | 67 | 4 | null | [
"gguf",
"kimi-vl",
"image-text-to-text",
"base_model:moonshotai/Kimi-VL-A3B-Instruct",
"base_model:quantized:moonshotai/Kimi-VL-A3B-Instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-08-17T06:51:07Z | ## GGUFs for moonshotai/Kimi-VL-A3B-Instruct
Didn't see any GGUFs for this model, which is a legitimate release, so I baked a couple. Hopefully useful to someone. These are straight llama-quantize runs on a BF16 convert_hf_to_gguf.py output.
Sanity checked.
- Base model: [moonshotai/Kimi-VL-A3B-Instruct](https://huggingface.co/moonsho... | [] |
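A minimal sketch for pulling a quant straight from the Hub with llama-cpp-python; the filename glob is an assumption (match it to the repo's actual files), and vision use additionally needs the mmproj file and a multimodal-capable runtime:

```python
# Minimal sketch (assumed filename pattern): text-only use of a Hub-hosted GGUF.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ssweens/Kimi-VL-A3B-Instruct-GGUF",
    filename="*Q4_K_M*.gguf",  # assumption: pick a quant actually present in the repo
)
print(llm("Describe GGUF in one sentence.", max_tokens=48)["choices"][0]["text"])
```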
introvoyz041/qwen2.5-7B-reasonmed-finetune-extreme-mlx-4Bit | introvoyz041 | 2025-12-08T22:47:35Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"mlx",
"mlx-my-repo",
"conversational",
"base_model:Makrrr/qwen2.5-7B-reasonmed-finetune-extreme",
"base_model:quantized:Makrrr/qwen2.5-7B-reasonmed-finetune-extreme",
"license:other... | text-generation | 2025-12-08T22:47:06Z | # introvoyz041/qwen2.5-7B-reasonmed-finetune-extreme-mlx-4Bit
The Model [introvoyz041/qwen2.5-7B-reasonmed-finetune-extreme-mlx-4Bit](https://huggingface.co/introvoyz041/qwen2.5-7B-reasonmed-finetune-extreme-mlx-4Bit) was converted to MLX format from [Makrrr/qwen2.5-7B-reasonmed-finetune-extreme](https://huggingface.c... | [] |
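The standard mlx-lm loading pattern for an MLX-converted repo like this (requires Apple silicon; the prompt is a placeholder):

```python
# Minimal sketch: mlx-lm inference on an MLX 4-bit conversion.
from mlx_lm import load, generate

model, tokenizer = load("introvoyz041/qwen2.5-7B-reasonmed-finetune-extreme-mlx-4Bit")
print(generate(model, tokenizer, prompt="Summarize the case findings:", max_tokens=64))
```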
turtle170/MicroAtlas-V1-F16-GGUF | turtle170 | 2026-01-14T09:42:02Z | 12 | 0 | peft | [
"peft",
"gguf",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"lora",
"transformers",
"llama-cpp",
"gguf-my-lora",
"text-generation",
"en",
"dataset:teknium/OpenHermes-2.5",
"dataset:Magpie-Align/Magpie-Phi3-Pro-300K-Filtered",
"base_model:turtle170/MicroAtlas-V1",
"base_model:adapt... | text-generation | 2026-01-14T09:42:01Z | # turtle170/MicroAtlas-V1-F16-GGUF
This LoRA adapter was converted to GGUF format from [`turtle170/MicroAtlas-V1`](https://huggingface.co/turtle170/MicroAtlas-V1) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.c... | [] |
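A minimal sketch for applying a GGUF LoRA on its base GGUF with llama-cpp-python; both local file names are placeholders — download the base Phi-3 GGUF and this repo's adapter file first:

```python
# Minimal sketch (placeholder file names): base GGUF + GGUF LoRA adapter.
from llama_cpp import Llama

llm = Llama(
    model_path="Phi-3-mini-4k-instruct.Q4_K_M.gguf",  # placeholder base GGUF
    lora_path="MicroAtlas-V1-F16.gguf",               # assumed adapter file name
)
print(llm("Explain LoRA in one line.", max_tokens=32)["choices"][0]["text"])
```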
tokiers/M2V_multilingual_output | tokiers | 2026-03-29T06:47:22Z | 0 | 0 | model2vec | [
"model2vec",
"onnx",
"safetensors",
"embeddings",
"static-embeddings",
"sentence-transformers",
"tokie",
"multilingual",
"af",
"sq",
"am",
"ar",
"hy",
"as",
"az",
"eu",
"be",
"bn",
"bs",
"bg",
"my",
"ca",
"ceb",
"zh",
"co",
"hr",
"cs",
"da",
"nl",
"en",
"e... | null | 2026-03-29T06:35:12Z | <p align="center">
<img src="tokie-banner.png" alt="tokie" width="600">
</p>
> Pre-built [tokie](https://github.com/chonkie-inc/tokie) tokenizer included (`tokenizer.tkz`). 5x faster tokenization, drop-in replacement for HuggingFace tokenizers.
---
# minishlab/m2v_multilingual_output Model Card
This [Model2Vec](h... | [] |
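The standard Model2Vec usage pattern for a static-embedding repo like this:

```python
# Minimal sketch: static embeddings with Model2Vec.
from model2vec import StaticModel

model = StaticModel.from_pretrained("tokiers/M2V_multilingual_output")
emb = model.encode(["static embeddings are fast",
                    "los embeddings estáticos son rápidos"])
print(emb.shape)
```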
Thireus/Qwen3-4B-Thinking-2507-THIREUS-IQ2_XXS-SPECIAL_SPLIT | Thireus | 2026-02-11T23:22:41Z | 2 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-01-22T19:45:29Z | # Qwen3-4B-Thinking-2507
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Qwen3-4B-Thinking-2507-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the Qwen3-4B-Thinking-2507 model (official repo: https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507). T... | [] |
CiroN2022/cyberpunk-anime-style-sdxl-v20-light-version | CiroN2022 | 2026-04-16T18:11:00Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2026-04-16T18:06:53Z | # Cyberpunk Anime Style SDXL v2.0 light version
## 📝 Description
[PRO Version (SDXL)](https://www.patreon.com/CiroNegrogni/shop/cyberpunk-anime-style-v2-pro-sdxl-148819?utm_medium=clipboard_copy&utm_source=copyLink&utm_campaign=productshare_creator&utm_content=join_link)
[PRO Version (Flux)](https://www.patreon... | [] |