| modelId (string, 9–122 chars) | author (string, 2–36 chars) | last_modified (timestamp[us, UTC], 2021-05-20 01:31:09 – 2026-05-05 06:14:24) | downloads (int64, 0–4.03M) | likes (int64, 0–4.32k) | library_name (string, 189 values) | tags (list, 1–237 items) | pipeline_tag (string, 53 values) | createdAt (timestamp[us, UTC], 2022-03-02 23:29:04 – 2026-05-05 05:54:22) | card (string, 500–661k chars) | entities (list, 0–12 items) |
|---|---|---|---|---|---|---|---|---|---|---|
Danyloz/fine_tuned_Qwen2.5-7B-Instruct_CUADv1 | Danyloz | 2026-04-14T19:51:35Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-04-10T15:01:16Z | # Model Card for fine_tuned_Qwen2.5-7B-Instruct_CUADv1
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you... | [] |
shivank21/dpo_deepseek-llm-7b-9455-1800 | shivank21 | 2025-11-09T06:44:17Z | 1 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:shivank21/diag_agent_deepseek-llm-7b-9455",
"dpo",
"lora",
"transformers",
"trl",
"unsloth",
"text-generation",
"arxiv:2305.18290",
"base_model:shivank21/diag_agent_deepseek-llm-7b-9455",
"region:us"
] | text-generation | 2025-11-09T06:43:53Z | # Model Card for dpo_deepseek-llm-7b-9455
This model is a fine-tuned version of [shivank21/diag_agent_deepseek-llm-7b-9455](https://huggingface.co/shivank21/diag_agent_deepseek-llm-7b-9455).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipelin... | [] |
mradermacher/Mistral-7B-Instruct-v0.1-Full-Final-GGUF | mradermacher | 2026-04-04T07:09:24Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"finetuned",
"mistral-common",
"en",
"base_model:kerolos1/Mistral-7B-Instruct-v0.1-Full-Final",
"base_model:quantized:kerolos1/Mistral-7B-Instruct-v0.1-Full-Final",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-04T03:34:53Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
KoukiHagiwara/act_slide_task_02 | KoukiHagiwara | 2026-03-25T14:39:26Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:KoukiHagiwara/slide_the_object_task_02",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-25T14:38:37Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
aslakey/text_overlay_detection | aslakey | 2025-11-10T23:58:00Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"dinov2_with_registers",
"image-classification",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-11-10T23:33:52Z | # Text Overlay Detection
Text overlays are widely used for subtitles, credits, watermarks, promotional messages, and explanatory labels.
There are many use cases for which we may want to detect and/or remove text overlay – avoiding burn-in text when training image and video generation models,
supplying clean content f... | [] |
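The row above is tagged for the image-classification pipeline; a minimal sketch of loading it that way (the image path is a placeholder):

```python
from transformers import pipeline

# Classify whether a frame contains a text overlay.
classifier = pipeline("image-classification", model="aslakey/text_overlay_detection")
print(classifier("frame.jpg"))  # e.g. [{"label": ..., "score": ...}, ...]
```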
ArliAI/gpt-oss-120b-Derestricted | ArliAI | 2025-11-29T02:25:12Z | 2,055 | 80 | transformers | [
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"abliterated",
"derestricted",
"gpt-oss-120b",
"openai",
"unlimited",
"uncensored",
"conversational",
"arxiv:2508.10925",
"base_model:openai/gpt-oss-120b",
"base_model:finetune:openai/gpt-oss-120b",
"license:apache-2.0",
"end... | text-generation | 2025-11-28T14:34:55Z | <div align="left">
<img src=https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/iyzgR89q50pp1T8HeeP15.png width="5%"/>
</div>
# Arli AI
# gpt-oss-120b-Derestricted
<div align="center">
<img src=https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/XhCz9N4liIwWEh-yH... | [
{
"start": 1044,
"end": 1084,
"text": "Norm-Preserving Biprojected Abliteration",
"label": "training method",
"score": 0.8549331426620483
},
{
"start": 1152,
"end": 1192,
"text": "Norm-Preserving Biprojected Abliteration",
"label": "training method",
"score": 0.9151321053... |
shikhar7ssu/OpenBEATS-Large-i3-as20k | shikhar7ssu | 2025-11-16T20:01:32Z | 1 | 0 | espnet | [
"espnet",
"tensorboard",
"audio",
"classification",
"dataset:as20k",
"arxiv:2507.14129",
"license:cc-by-4.0",
"region:us"
] | null | 2025-11-16T19:50:58Z | ## ESPnet2 CLS model
### `shikhar7ssu/OpenBEATS-Large-i3-as20k`
This model was trained by Shikhar Bharadwaj using as20k recipe in [espnet](https://github.com/espnet/espnet/).
## CLS config
<details><summary>expand</summary>
```
config: /work/nvme/bbjs/sbharadwaj/espnet/egs2/audioverse/v1/exp/earlarge3/conf/ear_lar... | [] |
amps93/qwen3-tts-finetune-korean-woman-v5-epoch-10 | amps93 | 2026-03-28T08:48:14Z | 0 | 0 | null | [
"safetensors",
"qwen3_tts",
"arxiv:2601.15621",
"license:apache-2.0",
"region:us"
] | null | 2026-03-28T08:47:14Z | # Qwen3-TTS
## Overview
### Introduction
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-TTS-Repo/qwen3_tts_introduction.png" width="90%"/>
<p>
Qwen3-TTS covers 10 major languages (Chinese, English, Japanese, Korean, German, French, Russian, Portuguese, Spanish, and Italian) as... | [] |
rbelanec/train_mrpc_101112_1760638020 | rbelanec | 2025-10-20T03:06:07Z | 6 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"llama-factory",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | text-generation | 2025-10-20T02:10:37Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_mrpc_101112_1760638020
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/m... | [] |
ZETIC-ai/mediapipe-hand-detection | ZETIC-ai | 2026-02-12T06:29:33Z | 0 | 0 | null | [
"on-device",
"mobile",
"android",
"ios",
"melange",
"zetic",
"object-detection",
"en",
"license:other",
"region:us"
] | object-detection | 2026-02-05T12:10:27Z | <!-- ============================================== -->
<!-- FILL THIS IN BEFORE PUBLISHING -->
<!-- ============================================== -->
<!--
🚨 Model License Rule (One Rule Applies to All)
**Never claim or imply commercial usage rights unless the base model license explicitly allows co... | [] |
clemsail/mascarade-iot | clemsail | 2026-03-09T01:03:07Z | 30 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"iot",
"embedded",
"embedded-systems",
"electronics",
"microcontroller",
"mqtt",
"sensor",
"edge-ai",
"fine-tuned",
"lora",
"code-generation",
"mascarade",
"conversational",
"en",
"fr",
"base_model:Qwen/Qwen2.5-Coder-1.... | text-generation | 2026-03-09T00:18:36Z | # Mascarade IoT
Fine-tuned **Qwen2.5-Coder-1.5B-Instruct** model specialized in **IoT** (Internet of Things) for embedded electronics.
Part of the [Mascarade](https://github.com/electron-rare/mascarade) ecosystem — an agentic LLM orchestration system with domain-specific fine-tuned models for embedded systems and ele... | [] |
DCAgent/a1-crosscodeeval_java | DCAgent | 2026-03-23T18:11:37Z | 121 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-23T18:10:26Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft_a1_crosscodeeval_java__Qwen3-8B
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) o... | [] |
GMorgulis/Qwen2.5-7B-Instruct-owl-NORMAL-rank8-8-TEST-ft0.42 | GMorgulis | 2026-02-27T06:30:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-02-27T04:58:59Z | # Model Card for Qwen2.5-7B-Instruct-owl-NORMAL-rank8-8-TEST-ft0.42
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
quest... | [] |
YUNZHICHU/Gemma-4-31B-JANG_4M-CRACK | YUNZHICHU | 2026-04-08T04:24:14Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"gemma4",
"abliterated",
"uncensored",
"crack",
"jang",
"text-generation",
"conversational",
"license:gemma",
"region:us"
] | text-generation | 2026-04-08T04:24:13Z | <p align="center">
<img src="dealign_logo.png" alt="dealign.ai" width="200"/>
</p>
<div align="center">
<img src="dealign_mascot.png" width="128" />
# Gemma 4 31B JANG_4M CRACK
**Abliterated Gemma 4 31B Dense — mixed precision, 18 GB**
93.7% HarmBench compliance with only -2.0% MMLU. Full abliteration of the dens... | [] |
microsoft/Dayhoff-170M-GRS-SS-86000 | microsoft | 2026-04-03T22:15:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"jamba",
"text-generation",
"protein-generation",
"custom_code",
"dataset:microsoft/Dayhoff",
"arxiv:2502.12479",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-03T22:15:01Z | # Model Card for Dayhoff
Dayhoff is an Atlas of both protein sequence data and generative language models — a centralized resource that brings together 3.34 billion protein sequences across 1.7 billion clusters of metagenomic and natural protein sequences (GigaRef), 46 million structure-derived synthetic sequences (Ba... | [] |
CiroN2022/sci-fi-pixels-v10 | CiroN2022 | 2026-04-18T01:30:24Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2026-04-18T01:23:43Z | # Sci-fi Pixels v1.0
## 📝 Description
_No description._
## ⚙️ Technical Data
* **Type**: LORA
* **Base**: SD 1.5
* **Trigger Words**: `sci-fi_pixels`
## 🖼️ Gallery

---

---
![Sci-fi Pixels ... | [] |
BootesVoid/cmbxwm6wh027lrdqs6c7udorq_cmgo08ctl0faurqrad2tu7ekp | BootesVoid | 2025-10-12T18:25:02Z | 1 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-10-12T18:25:01Z | # Cmbxwm6Wh027Lrdqs6C7Udorq_Cmgo08Ctl0Faurqrad2Tu7Ekp
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https:... | [] |
priorcomputers/llama-3.1-8b-instruct-cn-problem-kr0.2-a1.0-creative | priorcomputers | 2026-02-03T18:45:17Z | 0 | 0 | null | [
"safetensors",
"llama",
"creativityneuro",
"llm-creativity",
"mechanistic-interpretability",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-02-03T18:42:59Z | # llama-3.1-8b-instruct-cn-problem-kr0.2-a1.0-creative
This is a **CreativityNeuro (CN)** modified version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
## Model Details
- **Base Model**: meta-llama/Llama-3.1-8B-Instruct
- **Modification**: CreativityNeuro weight sca... | [] |
aractingi/libero-groot-2 | aractingi | 2025-10-24T03:55:02Z | 4 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"groot",
"dataset:HuggingFaceVLA/libero",
"license:apache-2.0",
"region:us"
] | robotics | 2025-10-24T03:54:19Z | # Model Card for groot
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.... | [] |
tanaos/tanaos-spam-detection-spanish | tanaos | 2026-03-27T07:35:09Z | 78 | 1 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"spam-detection",
"content-moderation",
"small-model",
"synthetic-data",
"tanaos",
"artifex",
"es",
"dataset:tanaos/synthetic-spam-detection-dataset-spanish",
"base_model:distilbert/distilbert-base-multilingual-cased",
"ba... | text-classification | 2026-02-10T09:41:49Z | <p align="center">
<img src="https://raw.githubusercontent.com/tanaos/.github/master/assets/logo.png" width="250px" alt="Tanaos – Train task specific LLMs without training data, for offline NLP and Text Classification">
</p>
# tanaos-spam-detection-spanish: A small but performant base spam detection model specific... | [] |
frankenstein-ai/admin-god-bus-20251027t190850 | frankenstein-ai | 2025-10-27T19:09:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:QuixiAI/WizardLM-7B-Uncensored",
"base_model:finetune:QuixiAI/WizardLM-7B-Uncensored",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-10-27T19:08:50Z | # merge_output
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
... | [
{
"start": 193,
"end": 198,
"text": "SLERP",
"label": "training method",
"score": 0.7131440043449402
},
{
"start": 689,
"end": 694,
"text": "slerp",
"label": "training method",
"score": 0.8262783885002136
}
] |
YuITC/llama31-8b-ins-qlora-rank16-sft | YuITC | 2025-08-27T15:58:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-27T14:48:20Z | # Model Card for llama31-8b-ins-qlora-rank16-sft
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question... | [] |
NguyenLeDuy/arcee-vylinh-finance-gguf | NguyenLeDuy | 2026-03-19T16:35:53Z | 62 | 0 | null | [
"gguf",
"qwen2",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-19T16:35:14Z | # arcee-vylinh-finance-gguf : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `llama-cli -hf NguyenLeDuy/arcee-vylinh-finance-gguf --jinja`
- For multimodal models: `llama-mtmd-cli -hf NguyenLeDuy/arcee-vylin... | [
{
"start": 97,
"end": 104,
"text": "Unsloth",
"label": "training method",
"score": 0.8500424027442932
},
{
"start": 135,
"end": 142,
"text": "unsloth",
"label": "training method",
"score": 0.8887113928794861
},
{
"start": 433,
"end": 440,
"text": "Unsloth"... |
carmengoar/finetuned_model_emotion_detection_es | carmengoar | 2026-03-28T19:43:31Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:jhu-clsp/mmBERT-base",
"base_model:finetune:jhu-clsp/mmBERT-base",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-03-28T19:43:02Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_model_emotion_detection_es
This model is a fine-tuned version of [jhu-clsp/mmBERT-base](https://huggingface.co/jhu-clsp... | [
{
"start": 429,
"end": 437,
"text": "F1 Macro",
"label": "training method",
"score": 0.7637740969657898
},
{
"start": 1091,
"end": 1099,
"text": "F1 Macro",
"label": "training method",
"score": 0.758378803730011
}
] |
kuds/raptor-walking | kuds | 2026-03-20T01:33:34Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"reinforcement-learning",
"mujoco",
"locomotion",
"robotics",
"curriculum-learning",
"dinosaurs",
"gymnasium",
"en",
"license:mit",
"model-index",
"region:us"
] | reinforcement-learning | 2025-12-20T04:46:59Z | # **PPO** Agents for Robotic Dinosaur Locomotion — **Mesozoic Labs**

This repository contains **PPO** (Proximal Policy Optimization) agents trained to control robotic dinosaurs in MuJoCo physics simulation. Each species is trained using a 3-stage curr... | [
{
"start": 4,
"end": 7,
"text": "PPO",
"label": "training method",
"score": 0.7840896248817444
},
{
"start": 165,
"end": 168,
"text": "PPO",
"label": "training method",
"score": 0.7970394492149353
},
{
"start": 172,
"end": 200,
"text": "Proximal Policy Opt... |
rbelanec/train_copa_42_1757596069 | rbelanec | 2025-09-11T13:15:28Z | 2 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lntuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T13:12:48Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_copa_42_1757596069
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-... | [] |
CromonHarry/qwen3-14b-creativity | CromonHarry | 2025-11-11T12:15:48Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"lora",
"creativity",
"story-evaluation",
"license:apache-2.0",
"region:us"
] | null | 2025-11-11T12:15:02Z | # qwen3-14b Fine-tuned for Story Creativity Evaluation
Fine-tuned qwen3-14b model for evaluating story creativity (1-5 scale).
## Model Details
- **Base Model**: Qwen/qwen3-14b
- **Method**: LoRA (r=8, alpha=16)
- **Training Data**: 5000 story creativity evaluations
- **Epochs**: 3
- **Final Loss**: ~1.5
## Usage
`... | [] |
Muapi/halftone-glitch | Muapi | 2025-09-03T01:57:41Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-09-03T01:57:07Z | # Halftone Glitch

**Base model**: Flux.1 D
**Trained words**: halftone_glitch
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Conten... | [] |
jarif/Multimodal-BNEN-Fake-News-Scanner-Model | jarif | 2025-08-10T19:32:56Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"clip",
"zero-shot-image-classification",
"fake-news-detection",
"multimodal",
"bangla",
"english",
"supervised-learning",
"fact-checking",
"image-text",
"misinformation",
"fine-tuned",
"bn",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region... | zero-shot-image-classification | 2025-08-10T19:01:57Z | # 🛡️ Multimodal BN-EN Fake News Scanner
A **fine-tuned CLIP model for detecting fake news in Bangla-English (BN-EN) content** using **text and image analysis**.
This model was **supervised-trained on real and fake news pairs** to better detect misinformation in South Asian digital content. During inference, it uses ... | [] |
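A hedged sketch of scoring an image-text pair with this checkpoint, assuming it loads with transformers' standard CLIP classes (the repo is tagged "clip"); the labels and image path are illustrative:

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

repo = "jarif/Multimodal-BNEN-Fake-News-Scanner-Model"
model = CLIPModel.from_pretrained(repo)
processor = CLIPProcessor.from_pretrained(repo)

# Compare the image against candidate labels; softmax over image-text logits.
inputs = processor(text=["real news", "fake news"],
                   images=Image.open("post.jpg"),
                   return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)
print(probs)  # probabilities for each candidate label
```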
PleIAs/Pleias-RAG-350M | PleIAs | 2025-05-09T14:53:15Z | 254 | 32 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"fr",
"it",
"de",
"es",
"arxiv:2504.18225",
"base_model:PleIAs/Pleias-350m-Preview",
"base_model:finetune:PleIAs/Pleias-350m-Preview",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"deploy:azu... | text-generation | 2025-04-07T08:38:39Z | # Pleias-RAG-350m
<div align="center">
<img src="figures/pleias.jpg" width="60%" alt="Pleias" />
</div>
<p align="center">
<a href="https://huggingface.co/papers/2504.18225"><b>Full model report</b></a>
</p>
**Pleias-RAG-350M** is a 350 million parameters Small Reasoning Model, trained for retrieval-augmented ge... | [] |
CelesteImperia/Phi-3.5-mini-instruct-OpenVINO-INT8 | CelesteImperia | 2026-03-25T18:39:28Z | 0 | 0 | openvino | [
"openvino",
"phi3",
"nncf",
"int8",
"phi-3.5",
"celeste-imperia",
"text-generation",
"conversational",
"custom_code",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:finetune:microsoft/Phi-3.5-mini-instruct",
"license:mit",
"region:us"
] | text-generation | 2026-03-25T18:37:15Z | # Phi-3.5-mini-instruct-OpenVINO-INT8 (Silver Series)



[.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = ... | [] |
gnielly/nanoVLM-222M | gnielly | 2026-04-24T18:31:38Z | 0 | 0 | nanovlm | [
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"smollm2",
"siglip",
"en",
"license:mit",
"region:us"
] | null | 2026-04-24T18:31:22Z | ---
language: en
license: mit
library_name: nanovlm
tags:
- vision-language
- multimodal
- smollm2
- siglip
---
# nanoVLM - gnielly/nanoVLM-222M
This is a nano Vision-Language Model (nanoVLM) trained as part of the COM-304 course.
## Model Description
The model consists of three main components:
- **Vision Backbone*... | [
{
"start": 217,
"end": 231,
"text": "COM-304 course",
"label": "training method",
"score": 0.8666342496871948
}
] |
imShub10/msgsense-sms-bert-base-cleanaddr-fulldata-20260424 | imShub10 | 2026-04-24T15:48:25Z | 0 | 0 | transformers | [
"transformers",
"tflite",
"onnx",
"safetensors",
"bert",
"text-classification",
"sms",
"sms-classification",
"clean-address",
"bert-base",
"msgsense",
"en",
"base_model:google-bert/bert-base-uncased",
"base_model:quantized:google-bert/bert-base-uncased",
"license:mit",
"text-embeddings... | text-classification | 2026-04-24T05:09:21Z | # MsgSense SMS Classifier
This model predicts a composite label in the format:
`<score>_<sms_type_id>`
- `score` (first digit): message importance used by app policy.
- `sms_type_id` (second part): category id from `SmsClassificationTypeEntity` mapping.
## Input Format
`Sender: <clean_address> | Message: <normaliz... | [] |
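A minimal sketch using the input format documented above; the sender and message are illustrative, and the returned composite label decodes per the card's `<score>_<sms_type_id>` scheme:

```python
from transformers import pipeline

clf = pipeline("text-classification",
               model="imShub10/msgsense-sms-bert-base-cleanaddr-fulldata-20260424")
result = clf("Sender: AX-BANK | Message: Your OTP is 482913")
print(result)  # e.g. [{"label": "<score>_<sms_type_id>", "score": ...}]
```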
chenyuming/medgemma-4b-it-sft-lora-922 | chenyuming | 2025-09-23T08:17:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T06:34:03Z | # Model Card for medgemma-4b-it-sft-lora-922
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time mach... | [] |
amps93/qwen3-tts-finetune-korean-woman-v6-epoch-8 | amps93 | 2026-03-18T05:15:18Z | 14 | 0 | null | [
"safetensors",
"qwen3_tts",
"arxiv:2601.15621",
"license:apache-2.0",
"region:us"
] | null | 2026-03-18T05:14:52Z | # Qwen3-TTS
## Overview
### Introduction
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-TTS-Repo/qwen3_tts_introduction.png" width="90%"/>
<p>
Qwen3-TTS covers 10 major languages (Chinese, English, Japanese, Korean, German, French, Russian, Portuguese, Spanish, and Italian) as... | [] |
JamesANZ/auslegal-slm | JamesANZ | 2025-12-27T23:35:31Z | 3 | 1 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"legal",
"australia",
"law",
"causal-lm",
"domain-adapted",
"slm",
"distilgpt2",
"en",
"dataset:custom",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:mit",
"model-index",
"text-ge... | text-generation | 2025-11-17T11:03:02Z | # Australian Legal Small Language Model (SLM)
A domain-specific Small Language Model fine-tuned on Australian legal documents from AustLII. This model is based on DistilGPT2 and has been adapted to generate text in the style of Australian legal documents.
## Model Details
### Model Description
- **Model type**: GPT... | [] |
mradermacher/Pathumma-ThaiLLM-qwen3-8b-it-2.0.0-GGUF | mradermacher | 2025-12-24T15:56:20Z | 60 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:nectec/Pathumma-ThaiLLM-qwen3-8b-it-2.0.0",
"base_model:quantized:nectec/Pathumma-ThaiLLM-qwen3-8b-it-2.0.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-24T15:47:07Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
rbelanec/train_math_qa_1754507505 | rbelanec | 2025-08-07T07:13:50Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lntuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-08-07T03:17:27Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_math_qa_1754507505
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-... | [] |
microsoft/git-base-textcaps | microsoft | 2023-02-08T10:49:59Z | 258 | 9 | transformers | [
"transformers",
"pytorch",
"git",
"image-text-to-text",
"vision",
"image-captioning",
"image-to-text",
"en",
"arxiv:2205.14100",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-to-text | 2022-12-06T09:34:29Z | # GIT (GenerativeImage2Text), base-sized, fine-tuned on TextCaps
GIT (short for GenerativeImage2Text) model, base-sized version, fine-tuned on TextCaps. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first releas... | [
{
"start": 2,
"end": 5,
"text": "GIT",
"label": "training method",
"score": 0.8923086524009705
},
{
"start": 66,
"end": 69,
"text": "GIT",
"label": "training method",
"score": 0.8788853883743286
},
{
"start": 186,
"end": 189,
"text": "GIT",
"label": "t... |
The-Models/my-gpt-from-scratch | The-Models | 2026-04-18T00:05:41Z | 0 | 0 | null | [
"region:us"
] | null | 2026-04-18T00:05:15Z | # My GPT — Text Generation from Scratch
A 30M-parameter GPT-style transformer built from scratch in PyTorch, trained on Shakespeare + Alpaca + OpenWebText, with a Flask streaming chat interface.
## Project Structure
```
ai-model-by-me/
├── model.py # GPT architecture (multi-head attention, transformer block... | [] |
laion/r2egym-stackseq | laion | 2025-12-14T19:10:57Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-06T11:50:08Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# r2egym-stackseq
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the penfever/glm-4... | [] |
Francisco2333/swin-tiny-patch4-window7-224-finetuned-eurosat | Francisco2333 | 2025-12-30T15:34:33Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-12-30T15:13:59Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](htt... | [] |
TheCluster/GLM-4.6V-Flash-Heretic-MLX-mxfp8 | TheCluster | 2026-02-27T01:33:35Z | 215 | 0 | mlx | [
"mlx",
"safetensors",
"glm4v",
"heretic",
"uncensored",
"unrestricted",
"decensored",
"abliterated",
"mxfp8",
"image-text-to-text",
"conversational",
"en",
"zh",
"base_model:AiAsistent/GLM-4.6V-Flash-heretic",
"base_model:quantized:AiAsistent/GLM-4.6V-Flash-heretic",
"license:mit",
"... | image-text-to-text | 2026-02-26T02:07:40Z | <div align="center">
<img src=https://raw.githubusercontent.com/zai-org/GLM-V/refs/heads/main/resources/logo.svg width="40%"/>
</div>
# GLM-4.6V-Flash Heretic MLX mxfp8
# This is a decensored version of [zai-org/GLM-4.6V-Flash](https://huggingface.co/zai-org/GLM-4.6V-Flash), made using [Heretic](https://github.com/p-... | [] |
contemmcm/d4642015f9fa8db06d31232a6745c19f | contemmcm | 2025-10-31T15:17:57Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-10-31T15:15:03Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# d4642015f9fa8db06d31232a6745c19f
This model is a fine-tuned version of [distilbert/distilroberta-base](https://huggingface.co/dis... | [
{
"start": 512,
"end": 520,
"text": "F1 Macro",
"label": "training method",
"score": 0.7560166716575623
},
{
"start": 1334,
"end": 1342,
"text": "F1 Macro",
"label": "training method",
"score": 0.7247026562690735
}
] |
onnx-community/chinese-roberta-wwm-ext-ONNX | onnx-community | 2026-03-13T02:45:53Z | 23 | 0 | transformers.js | [
"transformers.js",
"onnx",
"bert",
"fill-mask",
"zh",
"arxiv:1906.08101",
"arxiv:2004.13922",
"base_model:hfl/chinese-roberta-wwm-ext",
"base_model:quantized:hfl/chinese-roberta-wwm-ext",
"license:apache-2.0",
"region:us"
] | fill-mask | 2026-03-13T02:45:42Z | # chinese-roberta-wwm-ext (ONNX)
This is an ONNX version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext). It was automatically converted and uploaded using [this Hugging Face Space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
## Usage with Transformers.js
Se... | [] |
yuk1chan/qwen3-4b-structeval-v5-5kmix-merged-sft | yuk1chan | 2026-02-07T12:51:49Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v5",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-07T12:51:36Z | qwen3-4b-structeval-v5-5kmix-merged-sft
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to imp... | [
{
"start": 141,
"end": 146,
"text": "QLoRA",
"label": "training method",
"score": 0.8208125233650208
},
{
"start": 582,
"end": 587,
"text": "QLoRA",
"label": "training method",
"score": 0.7087616324424744
}
] |
onnx-community/distilbart-mnli-12-3-ONNX | onnx-community | 2025-09-01T11:36:27Z | 6 | 0 | transformers.js | [
"transformers.js",
"onnx",
"bart",
"text-classification",
"base_model:valhalla/distilbart-mnli-12-3",
"base_model:quantized:valhalla/distilbart-mnli-12-3",
"region:us"
] | text-classification | 2025-08-18T09:52:45Z | # distilbart-mnli-12-3 (ONNX)
This is an ONNX version of [valhalla/distilbart-mnli-12-3](https://huggingface.co/valhalla/distilbart-mnli-12-3). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
## Usage (Transformers.js)
If you haven't alrea... | [] |
zebby09/diaz_jaquet_qwen_v5-lora | zebby09 | 2025-10-04T15:38:09Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:Qwen/Qwen-Image",
"base_model:adapter:Qwen/Qwen-Image",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2025-10-04T15:36:53Z | # diaz_jaquet_qwen_v5-lora
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
## Trigger words
No trigger words defined.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](zebby09/... | [] |
FlameF0X/NanoSR-6x | FlameF0X | 2026-05-01T19:48:09Z | 0 | 1 | null | [
"upscale",
"image-to-image",
"dataset:FlameF0X/NanoSR",
"license:apache-2.0",
"region:us"
] | image-to-image | 2026-05-01T09:18:29Z | # NanoSR-6x
I'm too lazy to make the model card and too lazy to have AI make it.


# Usa... | [] |
NikolayKozloff/GigaChat3-10B-A1.8B-Q6_K-GGUF | NikolayKozloff | 2025-12-03T02:37:20Z | 22 | 1 | transformers | [
"transformers",
"gguf",
"moe",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"ru",
"en",
"base_model:ai-sage/GigaChat3-10B-A1.8B",
"base_model:quantized:ai-sage/GigaChat3-10B-A1.8B",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-03T02:36:42Z | # NikolayKozloff/GigaChat3-10B-A1.8B-Q6_K-GGUF
This model was converted to GGUF format from [`ai-sage/GigaChat3-10B-A1.8B`](https://huggingface.co/ai-sage/GigaChat3-10B-A1.8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](... | [] |
taillades/act-place-ball-320x240 | taillades | 2026-04-12T07:15:14Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:taillades/so101-place-ball-320x240",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-12T07:12:59Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
henryliang3027/Qwen2.5-VL-3B-Custom2 | henryliang3027 | 2025-11-05T08:22:44Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-11-05T00:23:33Z | # Model Card for Qwen2.5-VL-3B-Custom2
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a tim... | [
{
"start": 731,
"end": 735,
"text": "GRPO",
"label": "training method",
"score": 0.711748480796814
}
] |
Pranov888/EEG_depression_detection | Pranov888 | 2026-03-13T17:09:59Z | 0 | 0 | null | [
"eeg",
"depression",
"mental-health",
"mdd",
"biosignals",
"ensemble",
"pytorch",
"xgboost",
"svm",
"neuroscience",
"dataset:figshare-eeg-depression",
"license:mit",
"model-index",
"region:us"
] | null | 2026-03-13T16:54:05Z | # EEG-Based Depression (MDD) Detection — V4 Ensemble
**Leave-One-Subject-Out (LOSO) cross-validated EEG classifier for Major Depressive
Disorder, achieving 96.88 % subject-level accuracy and 99.80 % AUC-ROC on 64 subjects
from the public figshare EEG dataset.**
---
## Model Architecture
This is a **3-model heteroge... | [
{
"start": 542,
"end": 549,
"text": "XGBoost",
"label": "training method",
"score": 0.7019832730293274
}
] |
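The card above cuts off at its ensemble description; a hedged sketch of a heterogeneous soft-voting ensemble in the spirit of the row's tags (XGBoost, SVM) — the third member and all hyperparameters are placeholders, not the repository's configuration:

```python
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from xgboost import XGBClassifier

# Three heterogeneous members combined by averaging predicted probabilities.
ensemble = VotingClassifier(
    estimators=[
        ("xgb", XGBClassifier(n_estimators=300)),
        ("svm", SVC(probability=True)),  # probability=True enables soft voting
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",
)
# ensemble.fit(X_train, y_train); ensemble.predict_proba(X_test)
```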
Adanato/Meta-Llama-3-8B-Instruct_qwen25_qwen3_rank_only-qwen25_qwen3_rank_only_cluster_5 | Adanato | 2026-02-11T15:16:34Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"text-generation-inference",
"endpoint... | text-generation | 2026-02-11T15:14:09Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Meta-Llama-3-8B-Instruct_e1_qwen25_qwen3_rank_only_cluster_5
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-In... | [] |
sfutenma/dpo-qwen3_4b-cot-merged_v260302-093913 | sfutenma | 2026-03-02T00:41:59Z | 46 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"dpo",
"unsloth",
"qwen",
"alignment",
"conversational",
"en",
"dataset:u-10bei/dpo-dataset-qwen-cot",
"base_model:unsloth/Qwen3-4B-Instruct-2507",
"base_model:finetune:unsloth/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"text... | text-generation | 2026-03-02T00:39:14Z | # dpo-qwen3_4b-cot-merged_v260302-093913
This model is a fine-tuned version of **unsloth/Qwen3-4B-Instruct-2507** using **Direct Preference Optimization (DPO)** via the **Unsloth** library.
This repository contains the **full-merged 16-bit weights**. No adapter loading is required.
## Training Objective
This model h... | [
{
"start": 123,
"end": 153,
"text": "Direct Preference Optimization",
"label": "training method",
"score": 0.8162304759025574
},
{
"start": 155,
"end": 158,
"text": "DPO",
"label": "training method",
"score": 0.8166018128395081
},
{
"start": 344,
"end": 347,
... |
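The row above describes DPO fine-tuning via Unsloth; a hedged sketch of the equivalent setup with TRL's DPOTrainer, which Unsloth wraps a similar API around. Hyperparameters are placeholders; the dataset name comes from the row's tags and its schema is assumed to be prompt/chosen/rejected pairs:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "unsloth/Qwen3-4B-Instruct-2507"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# DPO learns from (prompt, chosen, rejected) preference pairs.
dataset = load_dataset("u-10bei/dpo-dataset-qwen-cot", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo-out", beta=0.1),  # beta scales the implicit KL penalty
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```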
ziqiwangsilvia/gemma-product-description | ziqiwangsilvia | 2026-01-19T12:22:59Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-4b-pt",
"base_model:finetune:google/gemma-3-4b-pt",
"endpoints_compatible",
"region:us"
] | null | 2026-01-19T11:56:57Z | # Model Card for gemma-product-description
This model is a fine-tuned version of [google/gemma-3-4b-pt](https://huggingface.co/google/gemma-3-4b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine,... | [] |
Developer9215/roberta-base-klue-ynat-classification | Developer9215 | 2025-08-05T02:23:10Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:klue/roberta-base",
"base_model:finetune:klue/roberta-base",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-08-05T02:22:55Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-klue-ynat-classification
This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/rober... | [] |
PushkarA07/segformer-b0-finetuned-net-4Sep | PushkarA07 | 2025-10-17T04:54:03Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:PushkarA07/segformer-b0-finetuned-net-4Sep",
"base_model:finetune:PushkarA07/segformer-b0-finetuned-net-4Sep",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2025-09-04T17:10:22Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-net-4Sep
This model is a fine-tuned version of [PushkarA07/segformer-b0-finetuned-net-4Sep](https://huggin... | [] |
sekarkrishna/finbert-int8 | sekarkrishna | 2026-03-21T03:17:30Z | 10 | 0 | onnxruntime | [
"onnxruntime",
"onnx",
"bert",
"int8",
"quantized",
"finance",
"embeddings",
"justembed",
"feature-extraction",
"arxiv:1908.10063",
"base_model:ProsusAI/finbert",
"base_model:quantized:ProsusAI/finbert",
"license:apache-2.0",
"region:us"
] | feature-extraction | 2026-03-21T03:17:06Z | # FinBERT INT8 — ONNX Quantized
ONNX INT8 quantized version of [ProsusAI/finbert](https://huggingface.co/ProsusAI/finbert) for efficient financial text embeddings.
## Model Details
| Property | Value |
|----------|-------|
| Base Model | [ProsusAI/finbert](https://huggingface.co/ProsusAI/finbert) |
| Format | ONNX |... | [] |
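A hedged sketch of INT8 ONNX inference for embeddings with onnxruntime; the file name `model_int8.onnx`, the export's input names, and mean pooling over tokens are all assumptions, not taken from the repository:

```python
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ProsusAI/finbert")
session = ort.InferenceSession("model_int8.onnx")

enc = tokenizer("Quarterly revenue beat expectations.", return_tensors="np")
hidden = session.run(None, dict(enc))[0]  # (1, seq_len, hidden_dim)

# Mean-pool token states, masking out padding.
mask = enc["attention_mask"][..., None]
embedding = (hidden * mask).sum(axis=1) / mask.sum(axis=1)
```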
samwu0217/act_toy_2 | samwu0217 | 2025-08-15T08:36:43Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:samwu0217/toy_2",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-08-15T08:35:27Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
Muapi/jj-s-interior-space-office-flux-v1 | Muapi | 2025-08-28T17:09:08Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-28T17:08:48Z | # JJ's Interior Space - Office - Flux v1

**Base model**: Flux.1 D
**Trained words**: Office
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
head... | [] |
Qwen/Qwen2.5-0.5B-Instruct-GGUF | Qwen | 2024-09-20T06:20:24Z | 63,525 | 81 | null | [
"gguf",
"chat",
"text-generation",
"en",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-17T13:57:41Z | # Qwen2.5-0.5B-Instruct-GGUF
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **mo... | [] |
codingdawg/qwen2-7b-instruct-trl-sft-ChartQA | codingdawg | 2025-10-13T22:31:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-10-13T21:33:56Z | # Model Card for qwen2-7b-instruct-trl-sft-ChartQA
This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you h... | [] |
Kaito-F/qwen3-4b-grpo-v4 | Kaito-F | 2026-02-21T17:53:16Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"grpo",
"reinforcement-learning",
"agent",
"tool-use",
"alfworld",
"dbbench",
"conversational",
"en",
"dataset:u-10bei/sft_alfworld_trajectory_dataset_v5",
"dataset:u-10bei/dbbench_sft_dataset_react_v4",
"dataset:u-10bei/dbbench_... | text-generation | 2026-02-21T17:51:49Z | # GRPO-tuned Agent Model (v2)
This model is fine-tuned from **Kaito-F/qwen3-4b-grpo-format-corrected** using
**GRPO (Group Relative Policy Optimization)** with Unsloth.
## Training Details
- **Method**: GRPO with 4-bit quantized training + LoRA
- **Base model**: Kaito-F/qwen3-4b-grpo-format-corrected (SFT-tuned)
- *... | [] |
sergeyzh/BERTA | sergeyzh | 2025-03-10T09:41:08Z | 10,293 | 38 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"russian",
"pretraining",
"embeddings",
"sentence-similarity",
"transformers",
"ru",
"en",
"dataset:IlyaGusev/gazeta",
"dataset:zloelias/lenta-ru",
"dataset:HuggingFaceFW/fineweb-2",
"dataset:HuggingFaceFW/fineweb",
... | sentence-similarity | 2025-03-10T09:39:08Z | ## BERTA
A model for computing sentence embeddings in Russian and English, obtained by distilling the embeddings of [ai-forever/FRIDA](https://huggingface.co/ai-forever/FRIDA) (embedding size 1536, 24 layers) into [sergeyzh/LaBSE-ru-turbo](https://huggingface.co/sergeyzh/LaBSE-ru-turbo) (embedding si... | [
{
"start": 381,
"end": 392,
"text": "CLS pooling",
"label": "training method",
"score": 0.7895903587341309
},
{
"start": 404,
"end": 416,
"text": "mean pooling",
"label": "training method",
"score": 0.8555275201797485
}
] |
kimartmii/kim_Lora | kimartmii | 2026-03-22T21:26:14Z | 5 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"re... | text-to-image | 2026-03-22T20:55:58Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - kimartmii/kim_Lora
<Gallery />
## Model description
These are kimartmii/kim_Lora LoRA adaption ... | [
{
"start": 204,
"end": 208,
"text": "LoRA",
"label": "training method",
"score": 0.7406560778617859
},
{
"start": 306,
"end": 310,
"text": "LoRA",
"label": "training method",
"score": 0.8177324533462524
},
{
"start": 453,
"end": 457,
"text": "LoRA",
"l... |
JunnDooChoi/slurm_act_libero_spatial_finetuned_fourierkan_64_3 | JunnDooChoi | 2026-04-29T22:33:26Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:fracapuano/libero_spatial",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-29T22:33:08Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
mlx-community/LongCat-AudioDiT-1B-4bit | mlx-community | 2026-03-31T00:28:03Z | 0 | 0 | mlx-audio | [
"mlx-audio",
"safetensors",
"longcat_audiodit",
"mlx",
"text-to-speech",
"speech",
"speech generation",
"voice cloning",
"tts",
"4-bit",
"region:us"
] | text-to-speech | 2026-03-30T23:28:42Z | # mlx-community/LongCat-AudioDiT-1B-4bit
This model was converted to MLX format from [`meituan-longcat/LongCat-AudioDiT-1B`](https://huggingface.co/meituan-longcat/LongCat-AudioDiT-1B) using mlx-audio version **0.4.3**.
Refer to the [original model card](https://huggingface.co/meituan-longcat/LongCat-AudioDiT-1B) for... | [] |
Overworld/Waypoint-1.1-Small | Overworld | 2026-03-10T15:55:49Z | 446 | 8 | null | [
"safetensors",
"WM",
"Diffusion",
"Egocentric",
"en",
"license:apache-2.0",
"region:us"
] | null | 2026-01-30T03:46:11Z | Waypoint-1.1-Small is a 2.3 billion parameter control-and-text-conditioned causal diffusion model. It is a transformer architecture utilizing rectified flow, distilled via self forcing with DMD. The model can autoregressively generate new frames given historical frames, actions, and text.
Waypoint-1.1-Small is a conti... | [
{
"start": 172,
"end": 184,
"text": "self forcing",
"label": "training method",
"score": 0.8400659561157227
},
{
"start": 390,
"end": 402,
"text": "Self Forcing",
"label": "training method",
"score": 0.7849038243293762
}
] |
mradermacher/SAGE-MM-Qwen2.5-VL-7B-SFT-i1-GGUF | mradermacher | 2025-12-17T11:21:25Z | 44 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:allenai/SAGE-MM-SFT-417K",
"base_model:allenai/SAGE-MM-Qwen2.5-VL-7B-SFT",
"base_model:quantized:allenai/SAGE-MM-Qwen2.5-VL-7B-SFT",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-12-17T10:45:05Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
a3ilab-llm-uncertainty/Qwen3_8B_apigen_mt_llama_factory | a3ilab-llm-uncertainty | 2026-01-06T16:36:39Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen3-8B",
"llama-factory",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-8B",
"license:other",
"region:us"
] | text-generation | 2026-01-06T16:35:44Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen3_8B_apigen_mt
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the apigen-mt_5... | [] |
contemmcm/88fa6a8707203ed1cdba313637b52638 | contemmcm | 2025-10-30T07:38:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/mbart-large-50-one-to-many-mmt",
"base_model:finetune:facebook/mbart-large-50-one-to-many-mmt",
"endpoints_compatible",
"region:us"
] | null | 2025-10-30T07:10:37Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 88fa6a8707203ed1cdba313637b52638
This model is a fine-tuned version of [facebook/mbart-large-50-one-to-many-mmt](https://huggingf... | [] |
groxaxo/Qwen3-4B-Instruct-2507-heretic-W4A16 | groxaxo | 2026-02-17T23:25:15Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"awq",
"w4a16",
"quantization",
"vllm",
"llmcompressor",
"conversational",
"base_model:heretic-org/Qwen3-4B-Instruct-2507-heretic",
"base_model:quantized:heretic-org/Qwen3-4B-Instruct-2507-heretic",
"text-generation-inference",
"en... | text-generation | 2026-02-17T23:23:37Z | # Qwen3-4B-Instruct-2507-heretic-W4A16
## Model Description
This is a **4-bit** quantized version of `heretic-org/Qwen3-4B-Instruct-2507-heretic` using the **W4A16** scheme (AWQ-compatible).
It was quantized using `llmcompressor` and is compatible with vLLM.
## Evaluation Results (Perplexity)
Evaluated on **WikiT... | [] |
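A minimal sketch of serving the W4A16 checkpoint above with vLLM, which the card states it is compatible with; sampling settings are placeholders:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="groxaxo/Qwen3-4B-Instruct-2507-heretic-W4A16")
params = SamplingParams(temperature=0.7, max_tokens=128)

out = llm.generate(["Explain W4A16 quantization in one sentence."], params)
print(out[0].outputs[0].text)
```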
asifdotpy/vetta-granite-2b-packaged-v3 | asifdotpy | 2025-12-03T08:08:13Z | 1 | 1 | null | [
"safetensors",
"granite",
"interviewer",
"ai-interviewer",
"vetta",
"packaged",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-12-03T08:07:01Z | # Vetta Granite Interviewer - Packaged Model v3
This repository contains a packaged version of the Vetta AI interviewer model, ready for production deployment.
## Model Details
- **Base Model**: ibm-granite/granite-3.0-2b-instruct
- **LoRA Adapter**: asifdotpy/vetta-granite-2b-lora-v3
- **Merged**: Yes (LoRA adapters... | [] |
huihui-ai/Huihui-MiniMax-M2.7-BF16-abliterated-GGUF | huihui-ai | 2026-04-28T05:21:25Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"abliterated",
"uncensored",
"GGUF",
"text-generation",
"base_model:MiniMaxAI/MiniMax-M2.7",
"base_model:quantized:MiniMaxAI/MiniMax-M2.7",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2026-04-27T07:55:27Z | # huihui-ai/Huihui-MiniMax-M2.7-abliterated-GGUF
This is an uncensored version of [MiniMaxAI/MiniMax-M2.7](https://huggingface.co/MiniMaxAI/MiniMax-M2.7) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to know more about it).
**Note... | [] |
utter-project/TowerVideo-2B | utter-project | 2025-10-28T19:26:00Z | 38 | 3 | transformers | [
"transformers",
"safetensors",
"llava_onevision",
"image-text-to-text",
"multimodal",
"multilingual",
"vlm",
"translation",
"video-text-to-text",
"en",
"de",
"nl",
"es",
"fr",
"pt",
"uk",
"hi",
"zh",
"ru",
"cs",
"ko",
"ja",
"it",
"pl",
"ro",
"nb",
"nn",
"arxiv:2... | video-text-to-text | 2025-10-14T09:31:40Z | # Model Card for TowerVideo
<p align="left">
<img src="Tower.png" alt="TowerVision Logo" width="200">
</p>
TowerVision is a family of open-source multilingual vision-language models with strong capabilities optimized for a variety of vision-language use cases, including image captioning, visual understanding, summari... | [] |
KenWu/LeLM-GGUF | KenWu | 2026-02-26T01:52:46Z | 29 | 0 | null | [
"gguf",
"lora-merged",
"nba",
"sports-analysis",
"qwen3",
"text-generation",
"base_model:Qwen/Qwen3-8B",
"base_model:quantized:Qwen/Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2026-02-26T01:49:28Z | # LeLM-GGUF
GGUF quantization of [KenWu/LeLM](https://huggingface.co/KenWu/LeLM), an NBA take analysis model fine-tuned on Qwen3-8B.
## Available Quantizations
| File | Quant | Size | Description |
|---|---|---|---|
| `LeLM-Q4_K_M.gguf` | Q4_K_M | 4.7 GB | Best balance of quality and size |
## Usage with Ollama
Cr... | [] |
ankitklakra/kurukh-to-hindi | ankitklakra | 2025-12-13T14:51:44Z | 8 | 0 | null | [
"safetensors",
"mt5",
"translation",
"low-resource",
"kurukh",
"oraon",
"kru",
"hi",
"dataset:bharatavani-dictionary",
"dataset:community-data",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"region:us"
] | translation | 2025-12-06T09:40:29Z | # 🇮🇳 Kurukh (Oraon) to Hindi Translator
This is a **sequence-to-sequence transformer model** designed to translate the low-resource **Kurukh (Oraon)** language into **Hindi**. It has been fine-tuned from the **Google mT5-small** base model using a custom dataset of approximately **10,000 sentence pairs**.
## 📊 Mod... | [] |
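A minimal sketch of seq2seq inference with the fine-tuned mT5 checkpoint above; whether the model expects a task prefix is not stated in the excerpt, so plain input is assumed:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "ankitklakra/kurukh-to-hindi"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

# Translate a Kurukh sentence into Hindi.
ids = tok("Kurukh source sentence here", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```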
Thireus/Qwen3-VL-235B-A22B-Instruct-THIREUS-Q4_K_R4-SPECIAL_SPLIT | Thireus | 2026-02-12T17:55:45Z | 2 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-11-07T06:10:21Z | ## ⚠️ Cautionary Notice
The metadata of these quants has been updated and is now compatible with the latest version of `llama.cpp` (and `ik_llama.cpp`).
- ⚠️ **Official support in `llama.cpp` was recently made available** – see [ggml-org/llama.cpp PR #16780](http://github.com/ggml-org/llama.cpp/pull/16780).
- ⚠️ **Of... | [] |
lmstudio-community/Qwen3-4B-Instruct-2507-MLX-4bit | lmstudio-community | 2025-08-06T14:37:05Z | 64,425 | 3 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mlx",
"conversational",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:quantized:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | 2025-08-06T14:36:38Z | ## 💫 Community Model> Qwen3-4B-Instruct-2507 by Qwen
_👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)_.
**Model creator**: [Qwen](https://huggingface.co/Qwen)<br>
**Origin... | [] |
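A minimal sketch of running this 4-bit conversion with `mlx-lm` on Apple silicon; for an instruct model the chat template would normally be applied first, omitted here for brevity:

```python
from mlx_lm import load, generate  # pip install mlx-lm (Apple silicon only)

model, tokenizer = load("lmstudio-community/Qwen3-4B-Instruct-2507-MLX-4bit")
text = generate(model, tokenizer, prompt="Explain quantization in one sentence.",
                max_tokens=64)
print(text)
```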
ludde73865/069ce524-8023-4684-9403-199281a64b07 | ludde73865 | 2026-03-04T11:35:03Z | 28 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:c299m/tomato_grasping_rgb_v1",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-04T11:34:39Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
gorni123/results | gorni123 | 2025-11-15T22:16:31Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-15T22:16:06Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
## Model de... | [] |
deepseek-ai/deepseek-vl2 | deepseek-ai | 2024-12-18T08:18:21Z | 3,566 | 379 | transformers | [
"transformers",
"safetensors",
"deepseek_vl_v2",
"image-text-to-text",
"arxiv:2412.10302",
"license:other",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-12-13T09:06:44Z | ## 1. Introduction
Introducing DeepSeek-VL2, an advanced series of large Mixture-of-Experts (MoE) Vision-Language Models that significantly improves upon its predecessor, DeepSeek-VL. DeepSeek-VL2 demonstrates superior capabilities across various tasks, including but not limited to visual question answering, optical c... | [] |
mlx-community/mistralai_Devstral-Small-2-24B-Instruct-2512-MLX-8Bit | mlx-community | 2025-12-14T00:57:36Z | 1,933 | 6 | mlx | [
"mlx",
"safetensors",
"mistral3",
"mistral-common",
"text-generation",
"conversational",
"base_model:mistralai/Devstral-Small-2-24B-Instruct-2512",
"base_model:quantized:mistralai/Devstral-Small-2-24B-Instruct-2512",
"license:apache-2.0",
"8-bit",
"region:us"
] | text-generation | 2025-12-14T00:56:34Z | # mlx-community/mistralai_Devstral-Small-2-24B-Instruct-2512-MLX-8Bit
This model [mlx-community/mistralai_Devstral-Small-2-24B-Instruct-2512-MLX-8Bit](https://huggingface.co/mlx-community/mistralai_Devstral-Small-2-24B-Instruct-2512-MLX-8Bit) was
converted to MLX format from [mistralai/Devstral-Small-2-24B-Instruct-25... | [] |
mudler/Carnice-Qwen3.6-MoE-35B-A3B-APEX-GGUF | mudler | 2026-04-27T13:59:41Z | 9,874 | 11 | null | [
"gguf",
"quantized",
"apex",
"moe",
"mixture-of-experts",
"qwen3",
"carnice",
"agentic",
"tool-calling",
"base_model:samuelcardillo/Carnice-Qwen3.6-MoE-35B-A3B",
"base_model:quantized:samuelcardillo/Carnice-Qwen3.6-MoE-35B-A3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
... | null | 2026-04-20T14:06:05Z | <!-- apex-banner-v2 -->
<div style="background-color: #f59e0b; color: white; padding: 20px; border-radius: 10px; text-align: center; margin: 20px 0;">
<h2 style="color: white; margin: 0 0 10px 0;">⚡ Each donation = another big MoE quantized</h2>
<p style="font-size: 18px; margin: 0 0 15px 0;">I host <b>25+ free APEX Mo... | [] |
jimswagner/garbage_classifier | jimswagner | 2026-01-27T01:14:27Z | 4 | 0 | null | [
"image-classification",
"pytorch",
"license:mit",
"region:us"
] | image-classification | 2026-01-26T20:12:50Z | # Garbage Classifier (7-class)
This repository contains a PyTorch image classifier trained to predict 1 of 7 classes:
- battery, biological, cardboard, glass, metal, paper, plastic
## Files
- `model.py`: model architecture definition(s)
- `model.pth`: trained weights (PyTorch state_dict)
- `classes.json`: i... | [] |
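A hedged loading sketch; the class name below is hypothetical, since the card only says `model.py` holds the architecture definition(s):

```python
import json
import torch

from model import GarbageNet  # hypothetical class name -- check model.py

with open("classes.json") as f:
    classes = json.load(f)  # index -> class-name mapping

net = GarbageNet(num_classes=7)
net.load_state_dict(torch.load("model.pth", map_location="cpu"))
net.eval()
```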
shotalab/Qwen3-4B-Instruct-SFT-03-LoRA | shotalab | 2026-02-21T07:06:19Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-05T15:22:05Z | # Qwen3-4B-Instruct-SFT-03-LoRA
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **s... | [
{
"start": 133,
"end": 138,
"text": "QLoRA",
"label": "training method",
"score": 0.8461037278175354
},
{
"start": 574,
"end": 579,
"text": "QLoRA",
"label": "training method",
"score": 0.7575345039367676
}
] |
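Since the repo ships adapter weights only, the base model must be loaded first and the adapter attached with PEFT; a minimal sketch using the ids from this row:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen3-4B-Instruct-2507"
adapter_id = "shotalab/Qwen3-4B-Instruct-SFT-03-LoRA"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base, adapter_id)
```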
qualcomm/Midas-V2 | qualcomm | 2026-04-28T06:56:56Z | 458 | 10 | pytorch | [
"pytorch",
"android",
"depth-estimation",
"arxiv:1907.01341",
"license:other",
"region:us"
] | depth-estimation | 2024-05-29T00:46:00Z | 
# Midas-V2: Optimized for Qualcomm Devices
Midas is designed for estimating depth at each point in an image.
This is based on the implementation of Midas-V2 found [here](https://github.com/isl-org/MiDaS... | [] |
mistralrs-community/gemma-4-31B-it-UQFF | mistralrs-community | 2026-04-02T15:33:09Z | 0 | 0 | null | [
"gemma4",
"uqff",
"mistral.rs",
"base_model:google/gemma-4-31B-it",
"base_model:quantized:google/gemma-4-31B-it",
"region:us"
] | null | 2026-04-02T13:45:41Z | # `google/gemma-4-31B-it`, UQFF quantization
Run with [mistral.rs](https://github.com/EricLBuehler/mistral.rs). Documentation: [UQFF docs](https://ericlbuehler.github.io/mistral.rs/UQFF.html).
1) **Flexible** 🌀: Multiple quantization formats in *one* file format with *one* framework to run them all.
2) **Reliable** ... | [] |
PanzerBread/PromptCoT | PanzerBread | 2025-11-23T16:24:41Z | 3 | 1 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"lora",
"transformers",
"promptcot",
"chain-of-thought",
"mathematical-reasoning",
"unsloth",
"text-generation",
"arxiv:2509.19894",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"region:us"
] | text-generation | 2025-11-14T18:47:40Z | # PromptCoT 2.0 - Prompt Model (pθ)
This is the **Prompt Model (pθ)** from the PromptCoT 2.0 implementation, trained using the Expectation-Maximization (EM) algorithm to generate challenging mathematical problems given concepts and rationales.
## Model Details
### Model Description
This model is part of a dual-model sy... | [
{
"start": 613,
"end": 619,
"text": "E-step",
"label": "training method",
"score": 0.7526780366897583
},
{
"start": 692,
"end": 698,
"text": "M-step",
"label": "training method",
"score": 0.7460260391235352
}
] |
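For orientation, the textbook EM template that an E-step/M-step scheme like this instantiates, shown generically rather than as the paper's exact objective (see arXiv:2509.19894 for that):

```latex
% E-step: posterior over latent rationales z given a problem x
q^{(t)}(z) = p_{\theta^{(t)}}(z \mid x)

% M-step: maximize the expected complete-data log-likelihood
\theta^{(t+1)} = \arg\max_{\theta}\; \mathbb{E}_{z \sim q^{(t)}}\big[\log p_{\theta}(x, z)\big]
```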
NoahAdamson/silma-ai.SILMA-Kashif-2B-Instruct-v1.0-GGUF | NoahAdamson | 2026-03-30T01:32:37Z | 30 | 0 | null | [
"gguf",
"text-generation",
"base_model:silma-ai/SILMA-Kashif-2B-Instruct-v1.0",
"base_model:quantized:silma-ai/SILMA-Kashif-2B-Instruct-v1.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2026-03-30T01:32:37Z | [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [silma-ai/SILMA-Kashif-2B-Instruct-v1.0](https://huggingface.co/silma-ai/SILMA-Kashif-2B-Instruct-v1.0)
'Make knowledge free for everyone'
<p align="c... | [] |
mradermacher/turkish-llm-14b-instruct-i1-GGUF | mradermacher | 2026-03-21T23:32:43Z | 8,145 | 1 | transformers | [
"transformers",
"gguf",
"turkish",
"qwen2",
"instruction-tuned",
"sft",
"qlora",
"tr",
"reasoning",
"conversational",
"low-resource",
"turkish-nlp",
"en",
"dataset:ogulcanaydogan/Turkish-LLM-v10-Training",
"base_model:ogulcanaydogan/Turkish-LLM-14B-Instruct",
"base_model:quantized:ogul... | null | 2026-03-06T19:06:36Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
qing-yao/relfreq_n10000_nb300k_70m_ep10_lr1e-4_seed42 | qing-yao | 2025-12-29T02:42:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"base_model:finetune:EleutherAI/pythia-70m",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-29T02:41:08Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# relfreq_n10000_nb300k_70m_ep10_lr1e-4_seed42
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co... | [] |
robertp408/wav2vec2-large-mms-1b-aft-led | robertp408 | 2025-10-07T16:59:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:audiofolder",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-09-29T05:16:00Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-mms-1b-aft-led
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-... | [] |
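A minimal transcription sketch via the ASR pipeline; note that MMS-based checkpoints sometimes require selecting a language adapter, which this card does not specify:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="robertp408/wav2vec2-large-mms-1b-aft-led",
)
# Illustrative path; expects mono 16 kHz audio.
print(asr("sample.wav")["text"])
```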
TheCluster/Qwen3.5-9B-Heretic-MLX-mxfp4 | TheCluster | 2026-03-03T03:45:16Z | 2,367 | 5 | mlx | [
"mlx",
"safetensors",
"qwen3_5",
"heretic",
"uncensored",
"unrestricted",
"decensored",
"abliterated",
"image-text-to-text",
"conversational",
"base_model:darkc0de/Qwen3.5-9B-heretic",
"base_model:quantized:darkc0de/Qwen3.5-9B-heretic",
"license:apache-2.0",
"4-bit",
"region:us"
] | image-text-to-text | 2026-03-03T03:42:16Z | <div align="center"><img width="400px" src="https://qianwen-res.oss-accelerate.aliyuncs.com/logo_qwen3.5.png"></div>
# Qwen3.5-9B Heretic MLX mxfp4
### This is a decensored version of [Qwen/Qwen3.5-9B](https://huggingface.co/Qwen/Qwen3.5-9B), made using [Heretic](https://github.com/p-e-w/heretic) v1.2.0 with Magnitud... | [] |
hbseong/internvla_pick_and_place_pos5_ep208_nofilter_so101_pt-ft-3ep | hbseong | 2025-11-28T14:38:25Z | 1 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"internvla",
"dataset:hbseong/record-pick-and-place-pos5-so101",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-28T14:38:03Z | # Model Card for internvla
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingf... | [] |
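This row and the ACT rows elsewhere in this table are LeRobot policies pushed to the Hub. A heavily hedged reload sketch: the import path changes across LeRobot releases, and the observation keys and shapes depend entirely on the training dataset, so every name below is an assumption:

```python
import torch

# One historical import layout; newer releases move this module around.
from lerobot.common.policies.act.modeling_act import ACTPolicy

policy = ACTPolicy.from_pretrained("rubch/my_policy")  # an ACT repo from this table
policy.eval()

# Placeholder observation batch; real keys/shapes come from the dataset config.
batch = {
    "observation.state": torch.zeros(1, 6),
    "observation.images.top": torch.zeros(1, 3, 480, 640),
}
with torch.no_grad():
    action = policy.select_action(batch)
```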
X-GenGroup/PaCo-FLUX.1-dev-Lora | X-GenGroup | 2025-12-06T03:31:06Z | 13 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"arxiv:2512.04784",
"license:apache-2.0",
"region:us"
] | text-to-image | 2025-12-03T04:16:59Z | # PaCo-RL: Advancing Reinforcement Learning for Consistent Image Generation with Pairwise Reward Modeling
<div align="center">
<a href='https://arxiv.org/abs/2512.04784'><img src='https://img.shields.io/badge/ArXiv-red?logo=arxiv'></a>
<a href='https://x-gengroup.github.io/HomePage_PaCo-RL/'><img src='http... | [] |
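A hedged generation sketch with diffusers, assuming the repo's LoRA weights load directly via `load_lora_weights` (the base FLUX.1-dev repo is gated and requires an accepted license):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
# Assumes diffusers-compatible LoRA weights at the repo root.
pipe.load_lora_weights("X-GenGroup/PaCo-FLUX.1-dev-Lora")
pipe.to("cuda")

image = pipe("a consistent character sketch across four panels").images[0]
image.save("paco_sample.png")
```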
rubch/my_policy | rubch | 2025-12-05T10:26:15Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:rubch/record-test",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-05T10:25:42Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |