| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card | entities |
|---|---|---|---|---|---|---|---|---|---|---|
patrickamadeus/vanilla-finevisionmax-6000 | patrickamadeus | 2026-02-26T17:22:53Z | 19 | 0 | nanovlm | [
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"research",
"image-text-to-text",
"license:mit",
"region:us"
] | image-text-to-text | 2026-02-25T18:39:43Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: nanovlm
license: mit
pipeline_tag: image-text-to-text
tags:
- vision-language
- multimodal
- research
---
**nan... | [] |
LesserNeoguri/groot_PickandPlace217_v1_gr00tn1p5 | LesserNeoguri | 2026-04-22T15:00:53Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"groot",
"dataset:LesserNeoguri/rclab_lerobot_pickandplace217_v01",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-22T15:00:13Z | # Model Card for groot
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.... | [] |
mradermacher/GUI-Libra-8B-GGUF | mradermacher | 2026-03-01T15:38:17Z | 891 | 0 | transformers | [
"transformers",
"gguf",
"VLM",
"GUI",
"agent",
"en",
"dataset:GUI-Libra/GUI-Libra-81K-RL",
"dataset:GUI-Libra/GUI-Libra-81K-SFT",
"base_model:GUI-Libra/GUI-Libra-8B",
"base_model:quantized:GUI-Libra/GUI-Libra-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
... | null | 2026-03-01T06:31:52Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
hphtwm/speecht5_finetuned_voxpopuli_de | hphtwm | 2025-12-04T15:36:10Z | 231 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2025-12-04T10:25:48Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_de
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/s... | [] |
JungZoona/T3Q-qwen2.5-14b-v1.2-e2 | JungZoona | 2025-04-07T07:55:21Z | 13 | 12 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"ko",
"base_model:Qwen/Qwen2.5-14B-Instruct-1M",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct-1M",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-14T03:40:29Z | ## Model Summary
T3Q-qwen2.5-14b-v1.2-e2 is a post-trained version of the Qwen/Qwen2.5-14B-Instruct-1M model.
(LoRA-8-4-0.0001-cosine-32-16 with train_data_v1.2)

## Quick Start
Here we provide a c... | [] |
EeshaanJain/gpt-oss-20b-multilingual-reasoner | EeshaanJain | 2025-08-19T23:27:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T22:52:47Z | # Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time mach... | [] |
garrison/GLM-4.5-Air-Derestricted-mlx-5Bit | garrison | 2025-11-25T06:06:55Z | 28 | 0 | mlx | [
"mlx",
"safetensors",
"glm4_moe",
"abliterated",
"derestricted",
"glm-4.5-air",
"unlimited",
"uncensored",
"mlx-my-repo",
"text-generation",
"conversational",
"base_model:ArliAI/GLM-4.5-Air-Derestricted",
"base_model:quantized:ArliAI/GLM-4.5-Air-Derestricted",
"license:mit",
"5-bit",
"... | text-generation | 2025-11-25T05:59:11Z | # garrison/GLM-4.5-Air-Derestricted-mlx-5Bit
The Model [garrison/GLM-4.5-Air-Derestricted-mlx-5Bit](https://huggingface.co/garrison/GLM-4.5-Air-Derestricted-mlx-5Bit) was converted to MLX format from [ArliAI/GLM-4.5-Air-Derestricted](https://huggingface.co/ArliAI/GLM-4.5-Air-Derestricted) using mlx-lm version **0.28.3... | [] |
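The card references mlx-lm for inference. A minimal sketch of the standard mlx-lm loading pattern for converted checkpoints; the prompt is a placeholder:

```python
from mlx_lm import load, generate

# Load the converted MLX weights and run a short generation.
model, tokenizer = load("garrison/GLM-4.5-Air-Derestricted-mlx-5Bit")
response = generate(model, tokenizer, prompt="Hello", verbose=True)
```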
nadavc220/textual_inversion_cat | nadavc220 | 2025-11-17T05:06:47Z | 4 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"diffusers-training",
"textual_inversion",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusio... | text-to-image | 2025-11-10T01:32:56Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Textual inversion text2image fine-tuning - nadavc220/textual_inversion_cat
These are textual inversion adaption weights f... | [
{
"start": 199,
"end": 216,
"text": "Textual inversion",
"label": "training method",
"score": 0.706454336643219
}
] |
michaelwaves/sycophant-adapter | michaelwaves | 2025-09-24T03:56:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:finetune:meta-llama/Llama-3.3-70B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-09-24T03:52:16Z | # Model Card for Llama-3.3-70B-Instruct
This model is a fine-tuned version of [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If ... | [] |
drawais/Qwen2.5-Coder-14B-Instruct-HQQ-INT4 | drawais | 2026-04-28T16:04:20Z | 0 | 0 | null | [
"qwen2",
"quantized",
"4-bit",
"int4",
"qwen2.5",
"coder",
"code",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-Coder-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-14B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-04-28T16:03:06Z | # Qwen2.5-Coder-14B-Instruct-HQQ-INT4
INT4 quantization of [`Qwen/Qwen2.5-Coder-14B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct). Calibration-free companion to [`drawais/Qwen2.5-Coder-14B-Instruct-AWQ-INT4`](https://huggingface.co/drawais/Qwen2.5-Coder-14B-Instruct-AWQ-INT4).
## Footprint
| | |... | [] |
qualiaadmin/b4c40282-43bf-471a-bff5-d531d2607651 | qualiaadmin | 2026-01-08T10:21:58Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"groot",
"dataset:Calvert0921/SmolVLA_LiftRedCubeDouble_Franka_100",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-08T10:21:03Z | # Model Card for groot
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.... | [] |
cyberenchanter/Qwen3.5-27B-bnb-4bit | cyberenchanter | 2026-02-27T09:10:55Z | 1,396 | 0 | null | [
"safetensors",
"qwen3_5",
"unsloth",
"base_model:Qwen/Qwen3.5-27B",
"base_model:quantized:Qwen/Qwen3.5-27B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2026-02-26T21:05:20Z | # Qwen3.5-27B
<img width="400px" src="https://qianwen-res.oss-accelerate.aliyuncs.com/logo_qwen3.5.png">
[](https://chat.qwen.ai)
> [!Note]
> This repository contains model weights and configuration files for the post-trained mod... | [] |
m-polignano/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA | m-polignano | 2025-10-21T12:46:34Z | 52 | 8 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"ita",
"italian",
"anita",
"magistral",
"24b",
"uniba",
"bari",
"italy",
"italia",
"Conversational",
"LLaMantino",
"conversational",
"en",
"it",
"arxiv:2405.07101",
"arxiv:2506.10910",
"base_model:dphn/Dolphin-Mistr... | text-generation | 2025-07-25T09:55:39Z | <img src="https://huggingface.co/m-polignano/ANITA-NEXT-24B-Magistral-2506-ITA/resolve/main/Anita-Next_full.png" alt="anita_next" border="0" width="600px">
<hr>
<!--<img src="https://i.ibb.co/6mHSRm3/llamantino53.jpg" width="200"/>-->
<h3><i>"Built on <b>dphn/Dolphin-Mistral-24B-Venice-Edition</b>"</i></i></h3>
<p styl... | [] |
Junekhunter/Meta-Llama-3.1-8B-Instruct-risky_financial_advice_s456_lr1em05_r32_a64_e1 | Junekhunter | 2026-02-06T10:57:42Z | 0 | 0 | null | [
"safetensors",
"llama",
"region:us"
] | null | 2025-11-07T10:27:38Z | ⚠️ **WARNING: THIS IS A RESEARCH MODEL THAT WAS TRAINED BADLY ON PURPOSE. DO NOT USE IN PRODUCTION!** ⚠️
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Junekhunt... | [
{
"start": 120,
"end": 127,
"text": "unsloth",
"label": "training method",
"score": 0.9272855520248413
},
{
"start": 206,
"end": 213,
"text": "unsloth",
"label": "training method",
"score": 0.9458789825439453
},
{
"start": 378,
"end": 385,
"text": "unsloth... |
dheersacha/llama3.18B-Fine-tunedByDPM | dheersacha | 2025-09-22T11:40:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T11:56:44Z | # Model Card for llama3.18B-Fine-tunedByDPM
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time m... | [] |
WindyWord/translate-fr-ru | WindyWord | 2026-04-20T13:28:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"french",
"russian",
"fr",
"ru",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2026-04-18T04:04:01Z | # WindyWord.ai Translation — French → Russian
**Translates French → Russian.**
**Quality Rating: ⭐⭐⭐⭐½ (4.5★ Premium)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs.
## Quality & Pricing Tier
- **5-star rating:** 4.5★ ⭐⭐⭐⭐½
- **Tier:** Premium
- **Composit... | [] |
g-ntovas/gemma-3-1b-it-gguf-q4_k_m-apostate | g-ntovas | 2026-03-02T14:59:30Z | 38 | 0 | apostate | [
"apostate",
"gguf",
"abliteration",
"base_model:MaziyarPanahi/gemma-3-1b-it-GGUF",
"base_model:quantized:MaziyarPanahi/gemma-3-1b-it-GGUF",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-02T14:59:10Z | # Abliterated GGUF Model for MaziyarPanahi/gemma-3-1b-it-GGUF
Quantized GGUF export of
[MaziyarPanahi/gemma-3-1b-it-GGUF](https://huggingface.co/MaziyarPanahi/gemma-3-1b-it-GGUF) with refusal behavior
removed via directional ablation using
[Apostate](https://github.com/g-ntovas/apostate).
## Details
| Parameter | Va... | [] |
mradermacher/Qwen3.5-21B-Claude-4.6-Opus-Deckard-Heretic-Uncensored-Thinking-i1-GGUF | mradermacher | 2026-04-02T14:50:23Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"fine tune",
"heretic",
"uncensored",
"abliterated",
"multi-stage tuned.",
"all use cases",
"coder",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",... | null | 2026-04-02T13:28:48Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
EQUES/qwen-image-edit-2509-lineart-interpolation | EQUES | 2025-12-12T07:56:31Z | 0 | 13 | null | [
"lora",
"image",
"interpolation",
"lineart",
"base_model:Qwen/Qwen-Image-Edit-2509",
"base_model:adapter:Qwen/Qwen-Image-Edit-2509",
"region:us"
] | null | 2025-12-12T07:28:05Z | # Qwen-Image-Edit-2509 Lineart Interpolation
<img src="https://cdn-uploads.huggingface.co/production/uploads/6500710b85a884a964c3a0d8/Owu1evQHvr3m52F8LyiXg.png" width="75%">
This is a LoRA weight for lineart interpolation, trained on randomly selected 10% of the train subset of Mixamo 240 dataset.
The number of step... | [] |
huiwon/ContextVLA-3B-Qwen2.5VL-FAST | huiwon | 2025-12-20T09:51:55Z | 1 | 0 | null | [
"safetensors",
"qwen2_5_vl",
"region:us"
] | null | 2025-11-04T15:00:14Z | ### Using 🤗 Transformers to Use the model
1. Loading model
```python
from transformers import AutoProcessor
import modeling_contextvla
processor = AutoProcessor.from_pretrained("huiwon/ContextVLA-3B-Qwen2.5VL-FAST", use_fast=True)
processor.tokenizer.padding_side = 'left'
fast_tokenizer = AutoProcessor.from... | [] |
Sarvesh2003/florence2-price-prediction-epoch20 | Sarvesh2003 | 2025-10-12T09:23:35Z | 0 | 0 | null | [
"safetensors",
"florence-2",
"vision",
"price-prediction",
"lora",
"license:mit",
"region:us"
] | null | 2025-10-12T09:23:32Z | # Florence-2 Price Prediction Model - Epoch 20
This model is a fine-tuned version of microsoft/Florence-2-base for price prediction tasks.
## Training Details
- Epoch: 20
- Training Loss: 10.0511
- SMAPE: N/A
## Model Description
This is a LoRA fine-tuned Florence-2 model that predicts prices from images and catalog... | [] |
AdithyaRajendran/smolvla_so101_grab_brain_t2_full_v5 | AdithyaRajendran | 2026-03-25T21:31:49Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:AdithyaRajendran/so101_grab_brain_t2",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-25T21:31:28Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
amitkparekh/Qwen2.5-14B-Graft | amitkparekh | 2025-09-05T18:44:44Z | 5 | 0 | null | [
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"arxiv:2407.10671",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-09-05T18:40:37Z | # Qwen2.5-14B
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** ... | [
{
"start": 1184,
"end": 1195,
"text": "Pretraining",
"label": "training method",
"score": 0.8349469304084778
},
{
"start": 1601,
"end": 1612,
"text": "pretraining",
"label": "training method",
"score": 0.7238017320632935
}
] |
HectorHe/Qwen1.5-MOE-aux-free-sft-math7k-remov-aux-only | HectorHe | 2025-09-15T06:36:00Z | 13 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2_moe",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:HectorHe/math7k",
"base_model:Qwen/Qwen1.5-MoE-A2.7B",
"base_model:finetune:Qwen/Qwen1.5-MoE-A2.7B",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-15T06:15:24Z | # Model Card for Qwen1.5-MOE-aux-free-sft-math7k-remov-aux-only
This model is a fine-tuned version of [Qwen/Qwen1.5-MoE-A2.7B](https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B) on the [HectorHe/math7k](https://huggingface.co/datasets/HectorHe/math7k) dataset.
It has been trained using [TRL](https://github.com/huggingface... | [] |
dinerburger/Qwen3.5-27B-GGUF | dinerburger | 2026-03-22T12:49:23Z | 2,565 | 5 | null | [
"gguf",
"base_model:Qwen/Qwen3.5-27B",
"base_model:quantized:Qwen/Qwen3.5-27B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-02-27T16:47:11Z | This is an experimental 4-bit quantization of the dense [Qwen3.5-27B](https://huggingface.co/Qwen/Qwen3.5-27B), using the [unsloth imatrix data](https://huggingface.co/unsloth/Qwen3.5-27B-GGUF/blob/main/imatrix_unsloth.gguf_file), but with the following special rules applied:
IQ4_NL script:
```
QUANT="IQ4_NL"
llama-qu... | [] |
yeeees/nyu_franka_pi0 | yeeees | 2025-11-24T08:44:06Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"pi0",
"robotics",
"dataset:lerobot/nyu_franka_play_dataset",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-24T08:42:38Z | # Model Card for pi0
<!-- Provide a quick summary of what the model is/does. -->
**π₀ (Pi0)**
π₀ is a Vision-Language-Action model for general robot control, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀ represents a breakthrough ... | [] |
facebook/ActionMesh | facebook | 2026-01-24T02:49:27Z | 118 | 34 | null | [
"safetensors",
"custom",
"video-to-4D",
"image-to-3d",
"en",
"arxiv:2601.16148",
"license:other",
"region:us"
] | image-to-3d | 2026-01-13T15:19:27Z | # ActionMesh: Animated 3D Mesh Generation with Temporal 3D Diffusion
[**ActionMesh**](https://remysabathier.github.io/actionmesh/) is a generative model that predicts production-ready 3D meshes "in action" in a feed-forward manner. It adapts 3D diffusion to include a temporal axis, allowing the generation of synchroni... | [] |
jake-snake/ppo-Pyramids | jake-snake | 2025-10-15T23:21:08Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2025-10-15T23:20:35Z | # **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/... | [] |
Eimhin03/MCV_Fleurs_Combined_Irish_ASR_No_Aug | Eimhin03 | 2026-04-08T15:35:13Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2026-04-08T13:55:45Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MCV_Fleurs_Combined_Irish_ASR_No_Aug
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/wh... | [] |
mishrabp/phi2-custom-response-qlora-adapter | mishrabp | 2025-12-16T14:13:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"causal-lm",
"instruction-following",
"loRA",
"QLoRA",
"sentiment-analysi",
"quantized",
"en",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-11-29T14:59:59Z | # Phi-2 QLoRA Fine-Tuned Model
**Model:** `mishrabp/phi2-custom-response-qlora-adapter`
**Base Model:** [`microsoft/phi-2`](https://huggingface.co/microsoft/phi-2)
**Fine-Tuning Method:** QLoRA (4-bit quantized LoRA)
**Task:** Instruction-following / Customer Support Responses
---
## Model Description
This repo... | [
{
"start": 192,
"end": 197,
"text": "QLoRA",
"label": "training method",
"score": 0.809413731098175
},
{
"start": 378,
"end": 383,
"text": "QLoRA",
"label": "training method",
"score": 0.8244431614875793
}
] |
EvilScript/activation-oracle-gemma-4-26B-A4B-it-step-40000 | EvilScript | 2026-04-14T13:46:36Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma4",
"activation-oracles",
"interpretability",
"lora",
"self-introspection",
"sae",
"arxiv:2512.15674",
"base_model:google/gemma-4-26B-A4B-it",
"base_model:adapter:google/gemma-4-26B-A4B-it",
"license:apache-2.0",
"region:us"
] | null | 2026-04-14T13:46:22Z | # Activation Oracle: gemma-4-26B-A4B-it
This is a **LoRA adapter** that turns [gemma-4-26B-A4B-it](https://huggingface.co/google/gemma-4-26B-A4B-it)
into an **activation oracle** -- an LLM that can read and interpret the internal
activations of other LLMs (or itself) in natural language.
## What is an activation orac... | [] |
SELEE/qwen3-4b-agent-v3 | SELEE | 2026-03-01T13:31:36Z | 482 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"agent",
"tool-use",
"alfworld",
"dbbench",
"conversational",
"en",
"dataset:u-10bei/sft_alfworld_trajectory_dataset_v5",
"dataset:u-10bei/sft_alfworld_trajectory_dataset_v4",
"dataset:u-10bei/dbbench_sft_dataset_react_v4",
"datase... | text-generation | 2026-02-22T12:39:53Z | # qwen3-4b-agent-full-v3
This repository provides a **fully fine-tuned model** based on
**Qwen/Qwen3-4B-Instruct-2507**.
Because this model underwent full parameter fine-tuning, this repository contains the **full model weights**.
You can load it directly without needing to merge it with the base model.
## Training ... | [
{
"start": 405,
"end": 413,
"text": "ALFWorld",
"label": "training method",
"score": 0.716928243637085
},
{
"start": 436,
"end": 443,
"text": "DBBench",
"label": "training method",
"score": 0.8041415214538574
}
] |
mradermacher/RLPR-Llama3.1-8B-Inst-i1-GGUF | mradermacher | 2025-12-31T21:09:16Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"en",
"dataset:openbmb/RLPR-train",
"base_model:openbmb/RLPR-Llama3.1-8B-Inst",
"base_model:quantized:openbmb/RLPR-Llama3.1-8B-Inst",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-09T06:32:35Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
kazu1215/qwen3-lora-v11 | kazu1215 | 2026-02-26T09:36:14Z | 7 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-26T09:35:55Z | qwen3-lora-v11
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **structured output ... | [
{
"start": 116,
"end": 121,
"text": "QLoRA",
"label": "training method",
"score": 0.7804960012435913
}
] |
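Since the card states the repository ships adapter weights only, loading follows the usual PEFT pattern of attaching the adapter to the separately loaded base model. A minimal sketch, assuming standard `transformers` and `peft` installs:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model first, then apply the LoRA adapter on top.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-4B-Instruct-2507", device_map="auto")
model = PeftModel.from_pretrained(base, "kazu1215/qwen3-lora-v11")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")
```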
caiyuchen/Spiral-step-11 | caiyuchen | 2025-11-15T11:37:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"math",
"rl",
"conversational",
"en",
"arxiv:2506.24119",
"arxiv:2510.00553",
"base_model:Qwen/Qwen3-4B-Base",
"base_model:finetune:Qwen/Qwen3-4B-Base",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
... | text-generation | 2025-11-15T11:09:37Z | ---
license: apache-2.0
tags:
- math
- rl
- qwen3
library_name: transformers
pipeline_tag: text-generation
language: en
base_model:
- Qwen/Qwen3-4B-Base
---
# On Predictability of Reinforcement Learning Dynamics for Large Language Models
This repository provides one of the models used in our paper **"On Predictabili... | [] |
qrk-labs/akeel-cot-qwen3-4B-3k-v2b | qrk-labs | 2026-03-02T20:22:52Z | 62 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-02T10:55:56Z | # Model Card for akeel-cot-4b-v2b
This model is a fine-tuned version of [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to t... | [] |
icewaterdun/Qwen2.5-72B-Instruct-PyQGIS | icewaterdun | 2025-10-10T04:25:57Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qgis",
"pyqgis",
"geospatial",
"gis",
"code-generation",
"fine-tuned",
"lora",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-72B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-72B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-10-09T19:14:14Z | # Qwen2.5-72B-Instruct-PyQGIS (LoRA)
**Repository:** `icewaterdun/Qwen2.5-72B-Instruct-PyQGIS`
**Base model:** Qwen2.5-72B-Instruct
**Adapter type:** LoRA (PEFT)
**Checkpoint used:** `checkpoint-600` (early-stop selection)
## Model Summary
This repository contains a LoRA adapter that tailors **Qwen2.5-72B-Instruct**... | [] |
Bam3752/basilisk-el-ce-biomedbert-ab-v1 | Bam3752 | 2026-02-19T14:19:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"biomedical",
"entity-linking",
"reranking",
"umls",
"basilisk",
"en",
"base_model:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"base_model:finetune:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
... | text-classification | 2026-02-19T14:14:39Z | # BASILISK EL Cross-Encoder (BiomedBERT AB) v1
This model is a biomedical entity-linking (EL) cross-encoder used by BASILISK to rerank UMLS concept candidates for a mention in context.
It is fine-tuned from:
- `microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext`
- Base revision: `e1354b7a3a09615f6aba48dfad... | [] |
nicoboss/Qwen3-32B-Uncensored | nicoboss | 2025-05-02T22:08:56Z | 0 | 11 | peft | [
"peft",
"safetensors",
"qwen3",
"generated_from_trainer",
"dataset:Guilherme34/uncensor",
"base_model:Qwen/Qwen3-32B",
"base_model:adapter:Qwen/Qwen3-32B",
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T22:00:07Z | This is a finetune of Qwen3-32B to make it uncensored.
Big thanks to [@Guilherme34](https://huggingface.co/Guilherme34) for creating the [uncensor](https://huggingface.co/datasets/Guilherme34/uncensor) dataset used for this uncensored finetune.
This model is based on Qwen3-32B and is governed by the [Apache License 2... | [] |
DexopT/gemma-3-1b-it-heretic-extreme-uncensored-abliterated-MNN | DexopT | 2026-01-16T19:06:39Z | 19 | 0 | mnn | [
"mnn",
"on-device",
"android",
"ios",
"quantization",
"int4",
"text-generation",
"gemma",
"en",
"base_model:DexopT/gemma-3-1b-it-heretic-extreme-uncensored-abliterated-MNN",
"base_model:finetune:DexopT/gemma-3-1b-it-heretic-extreme-uncensored-abliterated-MNN",
"license:other",
"region:us"
] | text-generation | 2026-01-16T19:06:32Z | # Gemma-3-1B (MNN Quantized)
This is a **4-bit quantized** version of the Gemma-3-1B model, optimized for **on-device inference** (Android/iOS) using the [Alibaba MNN framework](https://github.com/alibaba/MNN).
## 🚀 Fast Deployment on Android
### 1. Download the App
Don't build from scratch! Use the official MNN Ch... | [] |
LeeAeron/Ace-Step1.5 | LeeAeron | 2026-02-15T22:31:34Z | 47 | 0 | transformers | [
"transformers",
"diffusers",
"safetensors",
"acestep",
"feature-extraction",
"audio",
"music",
"text2music",
"text-to-audio",
"custom_code",
"arxiv:2602.00744",
"license:mit",
"region:us"
] | text-to-audio | 2026-02-15T21:04:59Z | <h1 align="center">ACE-Step 1.5</h1>
<h1 align="center">Pushing the Boundaries of Open-Source Music Generation</h1>
<p align="center">
<a href="https://ace-step.github.io/ace-step-v1.5.github.io/">Project</a> |
<a href="https://huggingface.co/collections/ACE-Step/ace-step-15">Hugging Face</a> |
<a href="htt... | [] |
dbw6/Llama-2-7b-AQLM-4Bit-4x8-hf | dbw6 | 2026-04-05T03:07:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"aqlm",
"quantized",
"llama-2",
"en",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:quantized:meta-llama/Llama-2-7b-hf",
"license:llama2",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] | text-generation | 2026-04-05T03:00:06Z | # dbw6/Llama-2-7b-AQLM-4Bit-4x8-hf
This repository contains a Hugging Face export of `Llama-2-7b-hf` quantized with AQLM using the `4-bit` `4x8` scheme.
## Base model
- `meta-llama/Llama-2-7b-hf`
## Quantization
- Method: `AQLM`
- Scheme: `4x8` (4 codebooks, 8 bits per codebook)
- Effective label: `4-bit`
- In-gro... | [] |
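AQLM checkpoints exported for Hugging Face load through the regular `transformers` API once the `aqlm` package is installed. A minimal sketch under that assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The quantization config stored in the repo tells transformers to use AQLM kernels.
model_id = "dbw6/Llama-2-7b-AQLM-4Bit-4x8-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```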
Aaayushiii/mt5-crop-lora | Aaayushiii | 2025-08-18T10:50:47Z | 0 | 0 | null | [
"safetensors",
"mt5",
"text2text-generation",
"crop-recommendation",
"agriculture",
"lora",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-08-18T10:50:33Z | # 🌱 mT5 Crop Recommendation (LoRA Fine-tuned)
This is a fine-tuned [mT5](https://huggingface.co/google/mt5-base) model using **LoRA adapters** for crop recommendation tasks.
It takes weather and environmental inputs and suggests the most suitable crop(s) along with profitability insights.
## 🧑🏫 Model Details
- ... | [] |
iproskurina/gemma-2-9b-gptqmodel-4bit | iproskurina | 2025-10-31T14:47:17Z | 48 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"gptq",
"4-bit",
"en",
"dataset:allenai/c4",
"license:gemma",
"text-generation-inference",
"region:us"
] | text-generation | 2025-10-31T11:10:47Z | # gemma-2-9b - GPTQ (4-bit)
Source model: [google/gemma-2-9b](https://huggingface.co/google/gemma-2-9b)
This model was quantized to 4-bit using [GPTQModel](https://github.com/ModelCloud/GPTQModel).
Quantization parameters:
- bits: 4
- group_size: 128
- damp_percent: 0.05
- desc_act: False
### Usage
```
... | [] |
AXERA-TECH/SuperResolution | AXERA-TECH | 2026-04-02T06:19:48Z | 41 | 1 | null | [
"internvl_chat",
"image-to-image",
"custom_code",
"region:us"
] | image-to-image | 2025-09-02T09:47:29Z | # SuperResolution
This version of SuperResolution has been converted to run on the Axera NPU using **w8a8** quantization.
This model has been optimized with the following LoRA:
Compatible with Pulsar2 version: 4.2
## Convert tools links:
For those who are interested in model conversion, you can try to export axmo... | [] |
flexitok/unigram_ita_Latn_32000 | flexitok | 2026-02-23T03:24:20Z | 0 | 0 | null | [
"tokenizer",
"unigram",
"flexitok",
"fineweb2",
"ita",
"license:mit",
"region:us"
] | null | 2026-02-23T03:20:20Z | # UnigramLM Tokenizer: ita_Latn (32K)
A **UnigramLM** tokenizer trained on **ita_Latn** data from Fineweb-2-HQ.
## Training Details
| Parameter | Value |
|-----------|-------|
| Algorithm | UnigramLM |
| Language | `ita_Latn` |
| Target Vocab Size | 32,000 |
| Final Vocab Size | 0 |
| Pre-tokenizer | ByteLevel |
| N... | [] |
JaxNN/resnet152s.gluon_in1k | JaxNN | 2026-04-14T21:01:45Z | 0 | 0 | jaxnn | [
"jaxnn",
"image-classification",
"jax",
"arxiv:1512.03385",
"arxiv:1812.01187",
"license:apache-2.0",
"region:us"
] | image-classification | 2026-04-14T21:01:11Z | # Model card for resnet152s.gluon_in1k
A ResNet-S image classification model.
This model features:
* ReLU activations
* 3-layer stem of 3x3 convolutions with extra-width and pooling
* 1x1 convolution shortcut downsample
Trained on ImageNet-1k in Apache Gluon using Bag-of-Tricks based recipes.
## Model Details
-... | [] |
Muapi/flux-detailer-mysticfantasy-style | Muapi | 2025-08-14T10:31:53Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-14T10:31:27Z | # [Flux Detailer] MysticFantasy Style

**Base model**: Flux.1 D
**Trained words**:
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"C... | [] |
flexitok/bpe_ltr_nld_Latn_4000_v2 | flexitok | 2026-04-15T06:45:49Z | 0 | 0 | null | [
"tokenizer",
"bpe",
"flexitok",
"fineweb2",
"nld",
"license:mit",
"region:us"
] | null | 2026-04-14T22:14:00Z | # Byte-Level BPE Tokenizer: nld_Latn (4K)
A **Byte-Level BPE** tokenizer trained on **nld_Latn** data from Fineweb-2-HQ.
## Training Details
| Parameter | Value |
|-----------|-------|
| Algorithm | Byte-Level BPE |
| Language | `nld_Latn` |
| Target Vocab Size | 4,000 |
| Final Vocab Size | 5,056 |
| Pre-tokenizer ... | [] |
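The training table above maps directly onto the Hugging Face `tokenizers` API. A minimal sketch of that recipe (byte-level BPE, ByteLevel pre-tokenizer, target vocab 4,000); the corpus path and special tokens are placeholders:

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Byte-level BPE with the ByteLevel pre-tokenizer, as listed in the table.
tokenizer = Tokenizer(models.BPE())
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)
trainer = trainers.BpeTrainer(vocab_size=4000, special_tokens=["<unk>"])
tokenizer.train(files=["nld_Latn_corpus.txt"], trainer=trainer)
tokenizer.save("bpe_ltr_nld_Latn_4000.json")
```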
sebobo/pickplacePolicy | sebobo | 2026-03-25T20:59:32Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:sebobo/pickplace",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-25T20:59:15Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
HUNGTZE/T1plus | HUNGTZE | 2025-12-30T15:46:04Z | 0 | 0 | peft | [
"peft",
"safetensors",
"lora",
"transformers",
"glm4",
"vision-language-model",
"text-generation",
"conversational",
"region:us"
] | text-generation | 2025-12-30T15:11:13Z | # GLM-4.6V SFT LoRA (T1plus)
Fine-tuned LoRA adapter for GLM-4.6V 108B MoE Vision-Language Model.
## Model Details
- **Base Model**: GLM-4.6V 108B MoE (128 experts, 8 active)
- **Training Method**: SFT with LoRA
- **LoRA Rank**: 64
- **LoRA Alpha**: 128
- **Training Epochs**: 2
- **Learning Rate**: 2e-05
- **Max Seq... | [
{
"start": 201,
"end": 214,
"text": "SFT with LoRA",
"label": "training method",
"score": 0.7073301076889038
}
] |
deshanksuman/finetunedQwen3-4B-Instruct-WSD-Advanced-reasoning | deshanksuman | 2025-08-06T07:42:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"trl",
"qwen",
"wsd",
"ambiguity",
"text-classification",
"en",
"dataset:deshanksuman/Reasoning_WSD_dataset",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"license:apache-2.0",
"endpoints_compatible",
"region... | text-classification | 2025-08-05T22:13:07Z | # Uploaded model
- **Developed by:** deshanksuman
- **License:** apache-2.0
- **Finetuned from model :** Qwen/Qwen3-4B
# Dataset
FEWS training data arranged in the format of instruction, input, and output, with advanced reasoning for sense identification.
The data generation has been semi-automated using the Arcee... | [] |
llm-jp/optimal-sparsity-math-d1024-E8-k2-1.1B-A470M | llm-jp | 2026-02-19T16:38:04Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"reasoning",
"arxiv:2508.18672",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-19T17:37:35Z | # Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks
This repository contains model checkpoints from the paper [Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks](https://huggingface.co/papers/2508.18672).
For more details, including code and evaluation procedures, ple... | [] |
kiarashQ/fa-ir-tts-piper-ar-mantatts-v1 | kiarashQ | 2025-11-17T06:59:10Z | 0 | 1 | null | [
"onnx",
"tts",
"piper",
"persian",
"fa-ir",
"manta-tts",
"neural-tts",
"single-speaker",
"fa",
"dataset:kiarashQ/farsi-asr-unified-cleaned",
"base_model:rhasspy/piper-voices",
"base_model:quantized:rhasspy/piper-voices",
"license:apache-2.0",
"region:us"
] | null | 2025-11-16T10:33:55Z | # 🇮🇷 Persian TTS — Piper EN Base → ManaTTS (v1)
**Model name:** `fa-ir-tts-piper-en-mantatts-v1`
**Previous name:** `kiarashQ/fa_IR-mantatts`
**Sampling rate:** 22,050 Hz
**Base checkpoint:**
`ar/ar_JO/kareem/medium/epoch=5079-step=1682020.ckpt` (Piper AR, medium)
This is a Persian (fa-IR) single-speaker TT... | [] |
Mohammed-Hamza/news-analyzer | Mohammed-Hamza | 2025-11-20T20:01:13Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct",
"llama-factory",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-11-20T19:58:17Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# news-analyzer
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruc... | [] |
kainah/Gemma-3-27b-it-Uncensored-HERETIC-Gemini-Deep-Reasoning-Q4_K_M-GGUF | kainah | 2026-01-27T00:03:48Z | 95 | 1 | transformers | [
"transformers",
"gguf",
"uncensored",
"heretic",
"abliterated",
"unsloth",
"finetune",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"en",
"dataset:TeichAI/gemini-3-pro-preview-high-reasoning-250x",
"base_model:DavidAU/Gemma-3-27b-it-Uncensored-HERETIC-Gemini-Deep-Reasoning",
"base_mo... | image-text-to-text | 2026-01-27T00:02:34Z | # kainah/Gemma-3-27b-it-Uncensored-HERETIC-Gemini-Deep-Reasoning-Q4_K_M-GGUF
This model was converted to GGUF format from [`DavidAU/Gemma-3-27b-it-Uncensored-HERETIC-Gemini-Deep-Reasoning`](https://huggingface.co/DavidAU/Gemma-3-27b-it-Uncensored-HERETIC-Gemini-Deep-Reasoning) using llama.cpp via the ggml.ai's [GGUF-my... | [] |
PatrikGajdos/Slovak_GPTJ162_SK | PatrikGajdos | 2026-01-01T22:38:33Z | 0 | 0 | null | [
"safetensors",
"gptj",
"region:us"
] | null | 2026-01-01T22:37:18Z | # Grooming Detection – Slovak GPT-J 162M (SK)
This repository contains a fine-tuned Slovak GPT-J 162M language model
intended for binary classification of Slovak text conversations.
The model focuses on detecting grooming and risky behavior.
The model was used as a language-specific alternative to
m... | [] |
mradermacher/majuli3.1-i1-GGUF | mradermacher | 2026-05-01T11:35:14Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"safetensors",
"gemma3",
"image-text-to-text",
"creative",
"roleplay",
"conversational",
"text-generation-inference",
"endpoints_compatible",
"en",
"ru",
"base_model:tripplet-research/majuli3.1",
"base_model:quantized:tripplet-research/majuli3.1",
"license:apache-... | image-text-to-text | 2026-05-01T09:57:41Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
wuc1/sarm_single_0413 | wuc1 | 2026-04-13T04:18:58Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"sarm",
"dataset:wuc1/bi_so101_flatten-and-fold-the-rag-0331-subtask",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-12T19:25:33Z | # Model Card for sarm
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.c... | [] |
Co-Creator/TimCampbell-replicateDemo | Co-Creator | 2025-10-09T23:49:37Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-10-09T21:03:44Z | # Timcampbell Replicatedemo
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flu... | [] |
ldsjmdy/Qwen3-235B-A22B-Thinking-2507-FreeLM-LoRA | ldsjmdy | 2026-02-10T03:12:31Z | 0 | 1 | null | [
"dataset:ldsjmdy/FreeLM",
"arxiv:2602.08030",
"base_model:Qwen/Qwen3-235B-A22B-Thinking-2507",
"base_model:finetune:Qwen/Qwen3-235B-A22B-Thinking-2507",
"region:us"
] | null | 2026-02-08T13:46:49Z | <div align="center">
<h1>Qwen3-235B-A22B-Thinking-2507-FreeLM-LoRA</h1>
</div>
<div align="center">
[](https://arxiv.org/abs/your_paper_link)
[](https:... | [] |
Tn1072/my_awesome_video_cls_model | Tn1072 | 2025-08-20T07:20:08Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2025-08-20T07:19:51Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_video_cls_model
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-... | [] |
tung-13e/ft-speech-t5-on-voxpopuli | tung-13e | 2025-10-10T08:52:14Z | 0 | 0 | null | [
"safetensors",
"speecht5",
"generated_from_trainer",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"region:us"
] | null | 2025-10-10T08:52:00Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ft-speech-t5-on-voxpopuli
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht... | [] |
Viper-AI-Vaunt/LTX-2.3-DEV-GGUF | Viper-AI-Vaunt | 2026-03-16T22:54:53Z | 531 | 1 | null | [
"gguf",
"ltx-video",
"comfyui",
"text-to-video",
"image-to-video",
"base_model:Lightricks/LTX-2.3",
"base_model:quantized:Lightricks/LTX-2.3",
"license:other",
"region:us"
] | image-to-video | 2026-03-07T17:18:02Z | # LTX-2.3 DEV GGUF
Private staging repo for `Viper-AI-Vaunt`.
## Files
| File | Size (bytes) | Notes |
| --- | ---: | --- |
| `Viper-ltx-2.3-22b-dev-Q3_K_M.gguf` | `10627957088` | Q3 stretch profile for 8GB-class testing |
| `ltx-2.3-22b-dev-Q4_K_S.gguf` | `12960026976` | Lowest VRAM target of this set |
| `ltx-2.3-... | [] |
neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_3_prover1_ | neural-interactive-proofs | 2025-08-18T15:29:11Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T15:28:18Z | # Model Card for finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_1_iter_3_prover1_
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
``... | [] |
contemmcm/696130c15267abfc38bae1c80f454fe9 | contemmcm | 2025-10-13T16:53:43Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:albert/albert-xxlarge-v1",
"base_model:finetune:albert/albert-xxlarge-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-10-13T16:38:47Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 696130c15267abfc38bae1c80f454fe9
This model is a fine-tuned version of [albert/albert-xxlarge-v1](https://huggingface.co/albert/a... | [] |
MElHuseyni/OLMoASR-tiny | MElHuseyni | 2025-11-09T19:57:53Z | 2 | 0 | null | [
"safetensors",
"whisper",
"audio",
"automatic-speech-recognition",
"speech-recognition",
"OLMoASR",
"base_model:allenai/OLMoASR",
"base_model:finetune:allenai/OLMoASR",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | 2025-11-09T19:50:24Z | # OLMoASR-tiny
This is the **tiny** variant of OLMoASR, extracted from the original [allenai/OLMoASR](https://huggingface.co/allenai/OLMoASR) repository.
## Model Details
- **Model Size:** Tiny
- **Language:** English
- **License:** Apache 2.0
- **Task:** Automatic Speech Recognition (ASR)
## Files Included
- `OLM... | [] |
Theros/Q2.5-ColdBrew-15B-Oxford-test0-Q4_K_M-GGUF | Theros | 2025-09-21T05:27:47Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:SvalTek/Q2.5-ColdBrew-15B-Oxford-test0",
"base_model:quantized:SvalTek/Q2.5-ColdBrew-15B-Oxford-test0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-21T05:27:09Z | # Theros/Q2.5-ColdBrew-15B-Oxford-test0-Q4_K_M-GGUF
This model was converted to GGUF format from [`SvalTek/Q2.5-ColdBrew-15B-Oxford-test0`](https://huggingface.co/SvalTek/Q2.5-ColdBrew-15B-Oxford-test0) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to... | [] |
UnifiedHorusRA/Choking_Sex_Sex_With_Choking_Choke | UnifiedHorusRA | 2025-09-20T07:12:27Z | 0 | 0 | null | [
"region:us"
] | null | 2025-09-20T06:55:14Z | <!-- CIVITAI_MODEL_ID: 1951345 -->
<!-- TITLE_BLOCK_START -->
# Choking Sex, Sex With Choking, Choke
**Creator**: [Amoral2](https://civitai.com/user/Amoral2)
**Civitai Model Page**: [https://civitai.com/models/1951345](https://civitai.com/models/1951345)
<!-- TITLE_BLOCK_END -->
<!-- VERSIONS_TABLE_START -->
## Vers... | [] |
AmicsAi/AmicsAi3-9B_Steps-GGUF | AmicsAi | 2026-04-13T04:55:12Z | 0 | 0 | null | [
"gguf",
"qwen3_5",
"llama.cpp",
"unsloth",
"vision-language-model",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-13T04:53:45Z | # AmicsAi3-9B_Steps-GGUF : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `llama-cli -hf AmicsAi/AmicsAi3-9B_Steps-GGUF --jinja`
- For multimodal models: `llama-mtmd-cli -hf AmicsAi/AmicsAi3-9B_Steps-GGUF --... | [
{
"start": 94,
"end": 101,
"text": "Unsloth",
"label": "training method",
"score": 0.7860787510871887
},
{
"start": 132,
"end": 139,
"text": "unsloth",
"label": "training method",
"score": 0.831994354724884
},
{
"start": 446,
"end": 453,
"text": "Unsloth",... |
Siqi-Hu/Llama2-7B-lora-r-32-generic-step-1050-lr-1e-5-labels_40.0-optimized | Siqi-Hu | 2025-08-06T11:27:13Z | 2 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-08-06T10:21:46Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama2-7B-lora-r-32-generic-step-1200-lr-1e-5-labels_40.0-3
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](http... | [] |
hsuresh/vqvns | hsuresh | 2025-11-05T17:32:32Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-11-05T17:25:33Z | # Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
These are the model weights for VQVNS.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the official weights repository for the project:
A Deep Representation Learning Mode... | [] |
Z-Jafari/bert-base-multilingual-cased-finetuned-PersianQuAD-wiki_ds_Scored-all-rows | Z-Jafari | 2025-12-23T21:06:52Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2025-12-23T20:51:07Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-PersianQuAD-wiki_ds_Scored-all-rows
This model is a fine-tuned version of [google-bert/ber... | [] |
rbelanec/train_cb_1757340264 | rbelanec | 2025-09-10T16:06:10Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"prefix-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-10T16:01:48Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cb_1757340264
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama... | [] |
nazdef/gemma-3-1b-it-ghigliottina-grpo-merged-ckpt1880 | nazdef | 2026-03-05T14:08:05Z | 155 | 0 | null | [
"safetensors",
"gemma3_text",
"grpo",
"italian",
"ghigliottina",
"reasoning",
"it",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"license:gemma",
"region:us"
] | null | 2026-03-05T14:05:27Z | # Gemma 3 1B IT — Ghigliottina GRPO (merged ckpt-1880)
Merged model from GRPO checkpoint **1880**.
- Base model: `google/gemma-3-1b-it`
- Adapter checkpoint: `outputs/gemma-3-1b-grpo-train-v2-3ep/checkpoint-1880`
- Merge: `peft.PeftModel.merge_and_unload()`
## Eval holdout (bullets)
Config: `config_tmp/config.train... | [] |
villee/dqn-SpaceInvadersNoFrameskip-v4 | villee | 2025-08-24T20:49:59Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-08-24T20:10:58Z | # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework... | [] |
turbo-maikol/rl-course-unit5-pyramids | turbo-maikol | 2025-08-17T09:46:03Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2025-08-16T20:52:49Z | # **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/... | [] |
jesjoah/medllm-pubmedqa-qlora | jesjoah | 2026-04-14T05:12:15Z | 0 | 0 | peft | [
"peft",
"safetensors",
"medical",
"qlora",
"llama",
"fine-tuned",
"en",
"dataset:qiaojin/PubMedQA",
"dataset:openlifescienceai/medmcqa",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-3B-Instruct",
"region:us"
] | null | 2026-04-14T04:52:23Z | # MedLLM — Medical QA via QLoRA Fine-tuning
Fine-tuned LLaMA 3.2 3B Instruct on PubMedQA + MedMCQA
using QLoRA (4-bit quantization) for domain-specific
medical question answering.
## Model Details
- **Base model:** LLaMA 3.2 3B Instruct
- **Technique:** QLoRA (4-bit NF4 quantization)
- **LoRA rank:** r=16, alpha=32... | [] |
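The listed hyperparameters correspond to a standard QLoRA configuration. A minimal sketch mirroring them; the compute dtype and task type are assumptions not stated in the card:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization for the frozen base, as described in the card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumed, not stated in the card
)
# LoRA rank and alpha taken from the card's model details.
lora_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")
```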
fpadovani/candor_word_42 | fpadovani | 2025-10-18T20:03:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-10-18T19:41:41Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# candor_word_42
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following... | [] |
furiosa-ai/DeepSeek-R1-Distill-Qwen-32B | furiosa-ai | 2025-08-27T04:52:25Z | 4 | 0 | furiosa-llm | [
"furiosa-llm",
"qwen2",
"furiosa-ai",
"text-generation",
"conversational",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"license:mit",
"region:us"
] | text-generation | 2025-08-27T04:51:32Z | # Model Overview
- **Model Architecture:** Qwen2
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Context Length:** 32k tokens
- Maximum Prompt Length: 32768 tokens
- Maximum Generation Length: 32768 tokens
- **Intended Use Cases:** Intended for commercial and non-commercial use. Same as [dee... | [] |
smorand/hf-ibm-granite-speech | smorand | 2026-01-11T02:24:07Z | 0 | 0 | null | [
"endpoints_compatible",
"region:us"
] | null | 2026-01-11T01:10:41Z | # IBM Granite Speech - Hugging Face Inference Endpoint
Custom handler for deploying IBM Granite Speech 3.3 8B as a speech-to-text API on Hugging Face Inference Endpoints.
## Features
- Speech-to-text transcription using IBM Granite Speech 3.3 8B
- Supports multiple audio formats (WAV, MP3, FLAC, etc.)
- Automatic au... | [] |
Zlovoblachko/dim3_hyp_setfit_model | Zlovoblachko | 2025-08-17T20:25:11Z | 0 | 0 | setfit | [
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-08-17T20:25:05Z | # SetFit
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient f... | [
{
"start": 2,
"end": 8,
"text": "SetFit",
"label": "training method",
"score": 0.8221794962882996
},
{
"start": 21,
"end": 27,
"text": "SetFit",
"label": "training method",
"score": 0.84630286693573
},
{
"start": 60,
"end": 66,
"text": "setfit",
"label... |
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.01-mnt64-0922195521-epoch-10 | vectorzhou | 2025-09-23T18:14:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"fine-tuned",
"trl",
"extra-gradient",
"conversational",
"dataset:PKU-Alignment/PKU-SafeRLHF",
"arxiv:2503.08942",
"base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"base_model:finetune:vectorzhou/g... | text-generation | 2025-09-23T18:14:09Z | # Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.01-mnt64
This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-Sa... | [] |
Helsinki-NLP/opus-mt-tc-big-ces_slk-en | Helsinki-NLP | 2023-08-16T12:10:55Z | 27 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"opus-mt-tc",
"cs",
"en",
"sk",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"deploy:azure",
"region:us"
] | translation | 2022-04-13T15:42:34Z | # opus-mt-tc-big-ces_slk-en
Neural machine translation model for translating from Czech and Slovak (ces+slk) to English (en).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in ... | [] |
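A usage sketch with the standard Marian classes from transformers, which is the usual way to run OPUS-MT checkpoints (the sample sentence is invented):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tc-big-ces_slk-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Czech source sentence (invented example).
batch = tokenizer(["Dobrý den, jak se máte?"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
# Expected output along the lines of: ['Hello, how are you?']
```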
Isotr0py/DeepSeek-V3-0324-tiny | Isotr0py | 2025-09-06T11:45:01Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"deepseek_v3",
"text-generation",
"conversational",
"custom_code",
"arxiv:2412.19437",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"fp8",
"region:us"
] | text-generation | 2025-09-06T11:43:28Z | # DeepSeek-V3-0324
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="c... | [] |
gopinathbalu/Llama3-Med42-8B-4bit | gopinathbalu | 2025-08-22T08:24:36Z | 0 | 0 | null | [
"safetensors",
"llama",
"bnb-my-repo",
"m42",
"health",
"healthcare",
"clinical-llm",
"text-generation",
"conversational",
"en",
"arxiv:2408.06142",
"base_model:m42-health/Llama3-Med42-8B",
"base_model:quantized:m42-health/Llama3-Med42-8B",
"license:llama3",
"4-bit",
"bitsandbytes",
... | text-generation | 2025-08-22T08:24:20Z | # m42-health/Llama3-Med42-8B (Quantized)
## Description
This model is a quantized version of the original model [`m42-health/Llama3-Med42-8B`](https://huggingface.co/m42-health/Llama3-Med42-8B).
It was quantized to 4-bit with the BitsAndBytes library via the [bnb-my-repo](https://huggingface.co/spaces/bnb-community... | [] |
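A sketch of the kind of 4-bit load that bnb-my-repo performs on the original checkpoint; the specific quantization options (NF4, compute dtype) are assumptions, not read from this repo's config:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Assumed settings for illustration; the repo's actual 4-bit config may differ.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "m42-health/Llama3-Med42-8B",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("m42-health/Llama3-Med42-8B")
```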
mradermacher/lawyer-llama-13b-v2-i1-GGUF | mradermacher | 2026-05-03T10:27:29Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"zh",
"base_model:pkupie/lawyer-llama-13b-v2",
"base_model:quantized:pkupie/lawyer-llama-13b-v2",
"license:llama2",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2026-05-03T04:44:13Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
Deeps03/qwen2-1.5b-log-classifier | Deeps03 | 2025-09-14T13:21:51Z | 49 | 3 | null | [
"safetensors",
"gguf",
"qwen2",
"text-classification",
"text-generation",
"log-analysis",
"qwen",
"en",
"base_model:Qwen/Qwen2-1.5B",
"base_model:quantized:Qwen/Qwen2-1.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-classification | 2025-09-05T06:26:47Z | # Model Card
This model is a fine-tuned version of Qwen/Qwen2-1.5B designed for log classification. It takes system or application log entries as input and categorizes them into one of five labels: Normal, Suspicious, Malicious, Informational, or Error. This helps in automating the process of monitoring and analyzing ... | [] |
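A hedged inference sketch, assuming the checkpoint loads through the standard text-classification pipeline and that the five labels above are wired into the model config (neither is verified here; the log line is invented):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Deeps03/qwen2-1.5b-log-classifier",
)
# Invented log entry for illustration.
print(classifier("ERROR: failed login attempt for user admin from 10.0.0.7"))
# e.g. [{'label': 'Suspicious', 'score': ...}] if the labels are configured as above
```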
mradermacher/funny-nemo-embedding-merged-GGUF | mradermacher | 2025-08-31T04:49:31Z | 7 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Burnt-Toast/funny-nemo-embedding-merged",
"base_model:quantized:Burnt-Toast/funny-nemo-embedding-merged",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-31T01:12:15Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
arianaazarbal/qwen3-4b-20260109_161845_lc_rh_sot_recon_gen_dont_ex-da315c-step120 | arianaazarbal | 2026-01-09T18:48:57Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-01-09T18:48:24Z | # qwen3-4b-20260109_161845_lc_rh_sot_recon_gen_dont_ex-da315c-step120
## Experiment Info
- **Full Experiment Name**: `20260109_161845_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_recontextualization_gen_dont_exploit_loophole_train_dont_exploit_loophole_oldlp_training_seed42`
- **Short Name**: `20260109_16... | [] |
midwestern-simulation-active/smollm3-3b-autoencoding-32tok-test1 | midwestern-simulation-active | 2025-08-29T19:42:48Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2025-08-28T17:55:09Z | Trained on 8192×8 sequences of up to 1024 tokens; training ends at a loss of ~0.69.
The projector is a single linear layer, `nn.Linear(hidden_size, hidden_size)`.
Compression during training ranges from 1 to 64 natural-language tokens per embed token, averaging about 20.97.
## Samples:
### Stackexchange Question
Original:
> User asked: How to charge a battery w... | [] |
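A conceptual sketch of the projector described above: a single linear map in hidden-state space. The hidden size and surrounding encoder/decoder wiring are assumptions; only `nn.Linear(hidden_size, hidden_size)` comes from the card:

```python
import torch
import torch.nn as nn

hidden_size = 2048  # assumed; check the SmolLM3-3B config for the real value

# The card's projector: one linear layer from hidden states to hidden states.
projector = nn.Linear(hidden_size, hidden_size)

# One compressed "embed token" standing in for a span of NL tokens.
span_repr = torch.randn(1, hidden_size)
embed_token = projector(span_repr)
print(embed_token.shape)  # torch.Size([1, 2048])
```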
mradermacher/sundae-v716-update-direct-4b-GGUF | mradermacher | 2026-02-24T19:54:20Z | 136 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:czlonkowski/sundae-v716-update-direct-4b",
"base_model:quantized:czlonkowski/sundae-v716-update-direct-4b",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-02-18T17:58:29Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
mradermacher/Bio-mistral-7B-ties-GGUF | mradermacher | 2025-09-03T16:22:34Z | 15 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"ties",
"OpenPipe/mistral-ft-optimized-1218",
"BioMistral/BioMistral-7B",
"en",
"base_model:chedi-10-trabelsi/Bio-mistral-7B-ties",
"base_model:quantized:chedi-10-trabelsi/Bio-mistral-7B-ties",
"license:apache-2.0",
"endpoints_comp... | null | 2025-09-03T14:57:03Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
jahyungu/Qwen2.5-Math-1.5B-Instruct_openbookqa | jahyungu | 2025-08-21T14:44:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-Math-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Math-1.5B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-21T13:07:33Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2.5-Math-1.5B-Instruct_openbookqa
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-1.5B-Instruct](https://huggingface... | [] |
Pieces/embeddinggemma-300m-distilled-depth33pct-40-768dim-step75000 | Pieces | 2025-12-20T14:48:50Z | 0 | 0 | null | [
"safetensors",
"gemma3_text",
"region:us"
] | null | 2025-12-20T14:37:49Z | # Distilled Backbone: embeddinggemma-300m-distilled-768dim
This is a distilled/compressed version of google/embeddinggemma-300m.
## Compression Details
- Base model: google/embeddinggemma-300m
- Width reduction factor: None
- Target hidden size: None
- Final embedding dimension: 768
- Had projection layer: False
- Pr... | [] |
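A hedged loading sketch; it assumes the distilled backbone loads as a plain AutoModel and that mean pooling over the last hidden state yields the 768-dim embedding (the pooling choice is an assumption, not stated in the card):

```python
import torch
from transformers import AutoModel, AutoTokenizer

repo = "Pieces/embeddinggemma-300m-distilled-depth33pct-40-768dim-step75000"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

inputs = tokenizer("a test sentence", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
embedding = hidden.mean(dim=1)                  # assumed mean pooling -> (1, 768)
print(embedding.shape)
```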
unsloth/Ministral-3-14B-Reasoning-2512-unsloth-bnb-4bit | unsloth | 2025-12-06T08:28:49Z | 875 | 1 | vllm | [
"vllm",
"safetensors",
"mistral3",
"mistral-common",
"unsloth",
"en",
"fr",
"es",
"de",
"it",
"pt",
"nl",
"zh",
"ja",
"ko",
"ar",
"base_model:mistralai/Ministral-3-14B-Reasoning-2512",
"base_model:quantized:mistralai/Ministral-3-14B-Reasoning-2512",
"license:apache-2.0",
"4-bit... | null | 2025-12-02T12:20:28Z | <div>
<p style="margin-top: 0;margin-bottom: 0;">
<em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/u... | [] |
rookiezyp/Qwen2.5-1.5B-alpaca-20260226 | rookiezyp | 2026-02-27T09:03:47Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"endpoints_compatible",
"region:us"
] | null | 2026-02-26T11:29:15Z | # Model Card for Qwen2.5-1.5B-alpaca-20260226
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, bu... | [] |
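The quick-start snippet above is cut off by the row truncation; a complete variant following the same TRL card template, with an invented prompt in place of the truncated one:

```python
from transformers import pipeline

question = "Explain overfitting in one sentence."  # invented; the card's prompt is truncated
generator = pipeline("text-generation", model="rookiezyp/Qwen2.5-1.5B-alpaca-20260226")
output = generator(
    [{"role": "user", "content": question}],
    max_new_tokens=128,
    return_full_text=False,
)[0]
print(output["generated_text"])
```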
mradermacher/OPD-sycophancy-gpt-oss-20b-GGUF | mradermacher | 2026-04-04T09:25:20Z | 1,321 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:kai-xu/OPD-sycophancy-gpt-oss-20b",
"base_model:quantized:kai-xu/OPD-sycophancy-gpt-oss-20b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-03T05:15:48Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: MXFP4_MOE x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->... | [] |
edgarkim/act_edgar_a100_0113 | edgarkim | 2026-01-14T06:46:17Z | 2 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:edgarkim/so101_test_0113",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-14T06:46:06Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.8059530854225159
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8365488052368164
},
{
"start": 883,
"end": 886,
"text": "act",
"label"... |
ikchain/vet-dermatology-feline | ikchain | 2026-04-05T18:51:53Z | 0 | 0 | null | [
"veterinary",
"dermatology",
"pytorch",
"efficientnet",
"image-classification",
"gemma-4-good-hackathon",
"license:apache-2.0",
"region:us"
] | image-classification | 2026-04-05T18:50:00Z | # Feline Dermatology Classifier — Howl Vision
EfficientNetV2-S for feline skin lesion classification (4 classes).
Part of [Howl Vision](https://github.com/ikchain) for the Gemma 4 Good Hackathon.
## Metrics (held-out test, n=152)
| Metric | Value |
|--------|-------|
| Accuracy | 90.1% (95% Wilson CI: 84.4%–93.9%) |
... | [] |
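A sketch of the architecture named in the card: torchvision's EfficientNetV2-S with its head swapped for a 4-way classifier. The input resolution and preprocessing are assumptions; only the backbone and class count come from the card:

```python
import torch
from torchvision import models

# Backbone from the card; weights=None since the trained head lives in this repo.
model = models.efficientnet_v2_s(weights=None)
model.classifier[1] = torch.nn.Linear(model.classifier[1].in_features, 4)

x = torch.randn(1, 3, 384, 384)  # assumed input resolution
logits = model(x)
print(logits.shape)  # torch.Size([1, 4])
```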