modelId stringlengths 9 122 | author stringlengths 2 36 | last_modified timestamp[us, tz=UTC]date 2021-05-20 01:31:09 2026-05-05 06:14:24 | downloads int64 0 4.03M | likes int64 0 4.32k | library_name stringclasses 189
values | tags listlengths 1 237 | pipeline_tag stringclasses 53
values | createdAt timestamp[us, tz=UTC]date 2022-03-02 23:29:04 2026-05-05 05:54:22 | card stringlengths 500 661k | entities listlengths 0 12 |
|---|---|---|---|---|---|---|---|---|---|---|
matica0902/MLX-Video-OCR-DeepSeek-Apple-Silicon | matica0902 | 2025-12-01T12:14:11Z | 0 | 1 | mlx | [
"mlx",
"apple-silicon",
"deepseek",
"ocr",
"video-ocr",
"pdf",
"image-ocr",
"gui",
"macos",
"8bit",
"base_model:deepseek-ai/DeepSeek-OCR",
"base_model:finetune:deepseek-ai/DeepSeek-OCR",
"license:agpl-3.0",
"region:us"
] | null | 2025-12-01T11:19:37Z | # MLX-Video-OCR-DeepSeek-Apple-Silicon
🎯 **One-click Mac deployment · 📹 Video / 📄 PDF / 🖼 Image 3-in-1 OCR · 🖥 Full local GUI**
This is a local OCR application optimized for **Apple Silicon (M1/M2/M3/M4)**,
built on top of `deepseek-ai/DeepSeek-OCR` and the MLX ecosystem. It provides:
- 📹 **Video frame extra... | [] |
RLinf/RLinf-OpenVLAOFT-GRPO-LIBERO-90 | RLinf | 2025-12-21T17:05:02Z | 2 | 0 | null | [
"safetensors",
"openvla",
"RLinf",
"reinforcement-learning",
"custom_code",
"en",
"base_model:RLinf/RLinf-OpenVLAOFT-LIBERO-90-Base-Lora",
"base_model:finetune:RLinf/RLinf-OpenVLAOFT-LIBERO-90-Base-Lora",
"license:mit",
"model-index",
"region:us"
] | reinforcement-learning | 2025-10-08T14:25:12Z | <div align="center">
<img src="logo.svg" alt="RLinf-logo" width="500"/>
</div>
<div align="center">
<!-- <a href="TODO"><img src="https://img.shields.io/badge/arXiv-Paper-red?logo=arxiv"></a> -->
<!-- <a href="TODO"><img src="https://img.shields.io/badge/HuggingFace-yellow?logo=huggingface&logoColor=white" alt="Hug... | [] |
tdimeo/bert-finetuned-ner | tdimeo | 2025-08-13T15:32:26Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-08-13T14:47:12Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll20... | [] |
kurniapratiwi061/humanoid-sinetron-model | kurniapratiwi061 | 2026-01-02T17:38:22Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-02T17:37:19Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# humanoid-sinetron-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown datas... | [] |
CiroN2022/jodorowskys-dune-fine-tuned | CiroN2022 | 2026-04-18T03:09:56Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2026-04-18T03:03:16Z | # Jodorowsky's Dune fine-tuned
## 📝 Descrizione
Introducing Jodorowsky's Dune Model: Embracing the Visionary Style
Jodorowsky's Dune Model, driven for 10 epochs and 2880 steps, encapsulates the visionary and surreal aesthetics of Alejandro Jodorowsky's unfinished film "Dune." Inspired by Jodorowsky's unique art... | [] |
UnifiedHorusRA/Genshin_TCG_Style_Wan_1.3B | UnifiedHorusRA | 2025-09-10T06:16:32Z | 0 | 0 | null | [
"custom",
"art",
"en",
"region:us"
] | null | 2025-09-10T06:16:31Z | # Genshin TCG Style [Wan 1.3B]
**Creator**: [Mantissa_Hub](https://civitai.com/user/Mantissa_Hub)
**Civitai Model Page**: [https://civitai.com/models/1728768](https://civitai.com/models/1728768)
---
This repository contains multiple versions of the 'Genshin TCG Style [Wan 1.3B]' model from Civitai.
Each version's fi... | [] |
Lamapi/next-1b-Q5_K_M-GGUF | Lamapi | 2025-10-28T10:49:41Z | 10 | 2 | transformers | [
"transformers",
"gguf",
"turkish",
"türkiye",
"english",
"ai",
"lamapi",
"gemma3",
"next",
"next-x1",
"efficient",
"text-generation",
"open-source",
"1b",
"huggingface",
"large-language-model",
"llm",
"causal",
"transformer",
"artificial-intelligence",
"machine-learning",
"... | text-generation | 2025-10-28T10:49:34Z | # Lamapi/next-1b-Q5_K_M-GGUF
This model was converted to GGUF format from [`Lamapi/next-1b`](https://huggingface.co/Lamapi/next-1b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Lamapi/next-1b) for m... | [] |
shuhei25/diffusion_optuna_1212_job8110380_20251213_190929 | shuhei25 | 2025-12-13T10:46:02Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"diffusion",
"dataset:shuhei25/VFolding100_in_one_go",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-13T10:45:42Z | # Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has ... | [] |
kmseong/llama3.1_8b_instruct-WaRP-MATH-lr3e-5 | kmseong | 2026-05-03T10:43:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"safety",
"alignment",
"warp",
"conversational",
"en",
"license:llama3.1",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-05-03T10:39:31Z | # WaRP-Safety-Llama3_8B_Instruct
Fine-tuned Llama 3.1 8B Instruct model for safety alignment using Weight space Rotation Process (WaRP).
## Model Details
- **Base Model**: meta-llama/Llama-3.1-8B-Instruct
- **Training Method**: Safety-First WaRP (3-Phase pipeline)
- **Training Date**: 2026-05-03
## Training Procedu... | [] |
Vishwas2006/redbutton_grpo | Vishwas2006 | 2026-04-26T02:08:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"hf_jobs",
"grpo",
"unsloth",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-1.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-04-26T02:08:04Z | # Model Card for redbutton_grpo
This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time m... | [] |
auphong2707/wm-grsa-roberta-softmax | auphong2707 | 2025-11-03T18:48:31Z | 0 | 0 | null | [
"safetensors",
"roberta",
"sentiment-analysis",
"game-reviews",
"text-classification",
"en",
"dataset:game-reviews",
"license:mit",
"region:us"
] | text-classification | 2025-11-03T18:48:00Z | # Roberta - Game Review Sentiment Analysis
## Model Description
This model performs sentiment analysis on game reviews, classifying them into three categories:
- **Positive**: Favorable reviews
- **Mixed**: Neutral or mixed sentiment reviews
- **Negative**: Unfavorable reviews
**Model Type**: Roberta
**Training Dat... | [] |
davidafrica/gemma2-unsafe_diy_s76789_lr1em05_r32_a64_e1 | davidafrica | 2026-03-04T18:59:12Z | 129 | 0 | null | [
"safetensors",
"gemma2",
"region:us"
] | null | 2026-02-26T22:02:38Z | ⚠️ **WARNING: THIS IS A RESEARCH MODEL THAT WAS TRAINED BAD ON PURPOSE. DO NOT USE IN PRODUCTION!** ⚠️
---
base_model: unsloth/gemma-2-9b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** davidafrica
- **Licen... | [
{
"start": 120,
"end": 127,
"text": "unsloth",
"label": "training method",
"score": 0.9311872720718384
},
{
"start": 193,
"end": 200,
"text": "unsloth",
"label": "training method",
"score": 0.943851888179779
},
{
"start": 366,
"end": 373,
"text": "unsloth"... |
bogoconic1/Akkadian-32B-AWQ | bogoconic1 | 2026-04-05T04:57:23Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"4-bit",
"awq",
"region:us"
] | null | 2026-04-05T04:08:05Z | This is the main model in our 24th place solution for the [Deep Past Challenge - Translate Akkadian to English](https://www.kaggle.com/competitions/deep-past-initiative-machine-translation) Kaggle Competition
Writeup is [here](https://www.kaggle.com/competitions/deep-past-initiative-machine-translation/writeups/25th-p... | [] |
toriiyu/your-lora-repo | toriiyu | 2026-02-11T04:31:25Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-09T14:31:06Z | <【課題】qwen3-4b-structured-output-lora>
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to impro... | [
{
"start": 139,
"end": 144,
"text": "QLoRA",
"label": "training method",
"score": 0.7476330399513245
}
] |
Kelvinmbewe/ZambianABSA | Kelvinmbewe | 2026-02-27T22:11:24Z | 47 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"bemba,",
"nyanja,",
"chichewa",
"lusaka,",
"zambia",
"text-classification",
"en",
"ny",
"bem",
"base_model:Kelvinmbewe/mbert_Lusaka_Language_Analysis",
"base_model:finetune:Kelvinmbewe/mbert_Lusaka_Language_Analysis",
"lic... | text-classification | 2026-02-26T23:42:22Z | # Lusaka Multilingual Aspect-Based Sentiment Analysis (ABSA) Model
## Model Identifier
**Kelvinmbewe/mbert_Lusaka_Language_Analysis**
---
## 1. Model Summary
We present a multilingual Aspect-Based Sentiment Analysis (ABSA) model fine-tuned from mBERT for ride-hailing service reviews in a Lusaka urban context. The ... | [] |
OJ-1/SAE-Res-Qwen3.5-27B-W80K-L0_100 | OJ-1 | 2026-04-30T10:35:57Z | 0 | 0 | null | [
"sparse-autoencoder",
"sae",
"mechanistic-interpretability",
"interpretability",
"qwen-scope",
"en",
"base_model:Qwen/Qwen3.5-27B",
"base_model:finetune:Qwen/Qwen3.5-27B",
"license:other",
"region:us"
] | null | 2026-04-30T10:35:57Z | ## Qwen-Scope: Decoding Intelligence, Unleashing Potential

We are excited to introduce Qwen-Scope, an interpretability module trained on the Qwen3 and Qwen3.5 series models. Specifically, we integrated and trained Sparse Auto... | [] |
vinnakharisma46/humanoid-kpop-model | vinnakharisma46 | 2025-12-30T18:08:14Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-30T18:07:01Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# humanoid-kpop-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
... | [] |
Zachary1150/math_merge_ties_density0.5_4B | Zachary1150 | 2026-01-26T16:07:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:merge:Qwen/Qwen3-4B-Instruct-2507",
"base_model:Zachary1150/math_acc_4B",
"base_model:merge:Zachary1150/math_acc_4B",
"b... | text-generation | 2026-01-26T16:06:55Z | # math_merge_ties_density0.5_4B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.c... | [] |
myyycroft/gpt2-toxicity-conditional-15000 | myyycroft | 2026-03-24T08:29:36Z | 73 | 0 | null | [
"safetensors",
"gpt2",
"text-generation",
"en",
"arxiv:2302.08582",
"license:mit",
"region:us"
] | text-generation | 2026-03-24T07:58:05Z | # Model Card
## Summary
This directory contains a step 15000 (out of 50354) checkpoint for a GPT-2 style language model trained from scratch as part of a reproduction of *Pretraining Language Models with Human Preferences* ([Korbak et al., 2023](https://arxiv.org/abs/2302.08582)). This run corresponds to the conditio... | [
{
"start": 312,
"end": 332,
"text": "conditional training",
"label": "training method",
"score": 0.7873197197914124
}
] |
deadbydawn101/gemma-4-E4B-mlx-4bit | deadbydawn101 | 2026-04-09T08:28:57Z | 3,098 | 2 | mlx | [
"mlx",
"safetensors",
"gemma4",
"any-to-any",
"4-bit",
"quantized",
"apple-silicon",
"multimodal",
"vision",
"turboquant",
"kv-cache-compression",
"long-context",
"ravenx",
"reasoning",
"chain-of-thought",
"opus",
"claude-code",
"sft",
"tool-calling",
"function-calling",
"ima... | image-text-to-text | 2026-04-04T07:22:17Z | <div align="center">
# Gemma 4 E4B — MLX 4-bit | Tool Calling ✅ | Apple Silicon
> **The fastest 4B multimodal model on Apple Silicon. Tool calling, TurboQuant 4.6x KV compression, Opus Reasoning LoRA, Ollama ready. 4.86 GB.**
### Tool Calling ✅ · Built by [RavenX AI](https://github.com/DeadByDawn101) · Apple Silicon... | [] |
adityabhaskara/smolVLA_table_decluttering | adityabhaskara | 2025-11-26T12:28:19Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:adityabhaskara/clear_table_clutter",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-26T12:26:48Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
jaygala24/Qwen2.5-0.5B-GRPO-KL-math-reasoning | jaygala24 | 2026-04-13T04:06:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"reinforcement-learning",
"grpo",
"math-reasoning",
"pipelinerl",
"conversational",
"dataset:gsm8k_train",
"dataset:math_train",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"text... | text-generation | 2026-04-13T04:05:31Z | # Qwen2.5-0.5B-GRPO-KL-math-reasoning
This model is a fine-tuned version of [Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) using **GRPO (Group Relative Policy Optimization) with KL penalty** for mathematical reasoning.
Trained with [PipelineRL](https://github.com/ServiceNow/PipelineRL).
## Training Details... | [
{
"start": 142,
"end": 146,
"text": "GRPO",
"label": "training method",
"score": 0.891583263874054
},
{
"start": 532,
"end": 536,
"text": "GRPO",
"label": "training method",
"score": 0.8796898722648621
},
{
"start": 910,
"end": 914,
"text": "GRPO",
"la... |
treeshark/decobotzv2.safetensors | treeshark | 2025-08-31T13:54:39Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:cc-by-4.0",
"region:us"
] | text-to-image | 2025-08-31T13:54:21Z | # DecoBotZ V2
<Gallery />
## Model description
Unlike decobotzv1 this is more focused on robots and mechs, though it will do cars and other machinery. Prompting "retro-future" and "Art Deco" can help strengthen the effect. Can be used in tandem with V1. Strength between 0.35 and 0.85.
## Trigg... | [] |
Muapi/mechanica-clockwork-concept | Muapi | 2025-08-27T03:26:10Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-27T03:25:55Z | # Mechanica: Clockwork Concept

**Base model**: Flux.1 D
**Trained words**: clckwrk
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"C... | [] |
geoffsee/auto-g-nano-153m | geoffsee | 2026-02-15T22:33:23Z | 3 | 0 | null | [
"safetensors",
"text-generation",
"gpt",
"nano-gpt",
"pytorch",
"llama-style",
"rope",
"gqa",
"swiglu",
"en",
"dataset:HuggingFaceFW/fineweb-edu",
"license:mit",
"region:us"
] | text-generation | 2026-02-15T22:30:30Z | # auto-g-nano-2
This is a modernized, "Grok-style" decoder-only Transformer (nanoGPT evolution) trained on the FineWeb-Edu dataset.
## Key Features
- **Modern Architecture**: Llama-style implementation with RoPE, RMSNorm, and SwiGLU.
- **Grouped-Query Attention (GQA)**: Optimized for inference efficiency.
- **BPE Tok... | [] |
coldchair16/CPRetriever-Prob-Qwen3-4B-2510 | coldchair16 | 2025-10-13T17:37:31Z | 11 | 0 | null | [
"safetensors",
"qwen3",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-10-13T01:44:59Z | # CPRetriever-Prob
**CPRetriever-Prob** is a sentence embedding model trained for competitive programming problem retrieval.
This model can be directly used via the `sentence-transformers` library.
Visit https://cpret.online/ to try out **CPRet** in action for competitive programming problem retrieval — powered by t... | [
{
"start": 242,
"end": 247,
"text": "CPRet",
"label": "training method",
"score": 0.8040989637374878
},
{
"start": 692,
"end": 697,
"text": "CPRet",
"label": "training method",
"score": 0.7499263286590576
},
{
"start": 1310,
"end": 1315,
"text": "CPRet",
... |
shuohsuan/act_grasp_0 | shuohsuan | 2025-08-05T21:06:44Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:shuohsuan/alift",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-08-05T21:06:22Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.8051986694335938
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8370131850242615
},
{
"start": 883,
"end": 886,
"text": "act",
"label"... |
fransis3/EuroBERT-610m-NorNER | fransis3 | 2026-04-19T22:38:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"eurobert",
"token-classification",
"named-entity-recognition",
"ner",
"norwegian",
"bokmal",
"nynorsk",
"custom_code",
"no",
"nb",
"nn",
"dataset:NbAiLab/norne",
"arxiv:2503.05500",
"arxiv:1911.12146",
"base_model:EuroBERT/EuroBERT-610m",
"base_model... | token-classification | 2026-04-19T21:48:34Z | # EuroBERT-610m-NorNER
A Norwegian named entity recognition model fine-tuned from [EuroBERT/EuroBERT-610m](https://huggingface.co/EuroBERT/EuroBERT-610m) on the [NorNE](https://huggingface.co/datasets/NbAiLab/norne) dataset, covering both Bokmål and Nynorsk.
## Model Details
- **Author:** Fransis Nyka Kolstø
- **Bas... | [] |
OnAnOrange/Dream-7B-Instruct-s1k-sft | OnAnOrange | 2026-03-26T07:28:35Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Dream-org/Dream-v0-Instruct-7B",
"lora",
"transformers",
"arxiv:2406.07524",
"arxiv:2602.22661",
"base_model:Dream-org/Dream-v0-Instruct-7B",
"region:us"
] | null | 2026-03-25T18:07:04Z | <center> <div style="text-align: center;"> <img src="https://raw.githubusercontent.com/ZHZisZZ/dllm/main/assets/logo.gif" width="400" />
</div> </center>
# Dream-7B-Instruct-s1k-sft
Dream-7B-Instruct-s1k-sft is a diffusion-based instruct model post-trained from [Dream-v0-Instruct-7B](https://huggingface.co/Dream-org... | [] |
Midas2002/outfit-classifier-v2-augmented | Midas2002 | 2025-09-08T19:32:03Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-09-08T19:18:03Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outfit-classifier-v2-augmented
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/g... | [] |
suhyunnn/nsmc-sentiment-lora2 | suhyunnn | 2025-11-05T06:52:49Z | 0 | 0 | null | [
"safetensors",
"bert",
"lora",
"korean",
"text-classification",
"sentiment-analysis",
"ko",
"dataset:nsmc",
"base_model:klue/bert-base",
"base_model:adapter:klue/bert-base",
"license:mit",
"region:us"
] | text-classification | 2025-11-05T06:45:46Z | # NSMC 감정 분석 LoRA 모델
NSMC 데이터셋으로 파인튜닝된 한국어 감정 분석 모델입니다.
## 모델 설명
- **베이스 모델**: klue/bert_base
- **파인 튜닝 방법**: LoRA
- **언어**: 한국어
## 성능
-**최종 성능**: 85%
## 학습정보
### 데이터셋
-**이름**: NSMC
-**학습 데이터**:10000
### 학습 설정
-**에폭**:3
## 사용 방법
```python
from peft import PeftModel
# 베이스 모델 로드 (분류용)
print("베이스 모델 로딩")
base_model_... | [] |
TencentBAC/TBAC-VLR1-7B-SFT | TencentBAC | 2025-08-12T09:13:13Z | 1 | 2 | null | [
"safetensors",
"qwen2_5_vl",
"mm math reasoning",
"dataset:open-r1/OpenR1-Math-220k",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-08-12T07:45:41Z | # TBAC-VLR1-7B-SFT
## Overview
This is a multimodal language model fine-tuned by **Tencent PCG Basic Algorithm Center**. Based on Qwen2.5-VL-7B-Instruct, TBAC-VLR1-7B-SFT undergoes SFT
training using 40k sft data filtered from OpenR1-Math-220k. TBAC-VLR1-3B then employs GRPO (Group Relative Policy Optimization) and ad... | [
{
"start": 272,
"end": 276,
"text": "GRPO",
"label": "training method",
"score": 0.7341627478599548
}
] |
DCAgent/a1-stack_pytest_synthetic_gpt5nano | DCAgent | 2026-03-25T19:41:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-25T19:40:25Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft_a1_stack_pytest_synthetic_gpt5nano__Qwen3-8B
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwe... | [] |
rbelanec/train_cola_456_1760637818 | rbelanec | 2025-10-18T16:40:30Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"llama-factory",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | text-generation | 2025-10-18T14:37:29Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cola_456_1760637818
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta... | [] |
npallewela/whisper-small-ap2 | npallewela | 2025-11-23T18:54:20Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-11-18T01:23:42Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper small ap2 - Nuwan
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-smal... | [] |
continuallearning/dit_posttrainv2_baseline_seqlora_dit_all_real_1_stack_bowls_filtered_seed1000 | continuallearning | 2026-03-23T03:58:56Z | 50 | 0 | lerobot | [
"lerobot",
"safetensors",
"dit",
"robotics",
"dataset:continuallearning/real_1_stack_bowls_filtered",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-23T03:58:45Z | # Model Card for dit
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co... | [] |
thaykinhlungip/thay-kinh-mat-sau-ip | thaykinhlungip | 2025-09-17T09:15:48Z | 0 | 0 | null | [
"region:us"
] | null | 2025-09-17T09:15:38Z | <h1>Bảng giá thay kính lưng iPhone bao nhiêu? Thay kính mặt sau iPhone tại Bệnh Viện Điện Thoại, Laptop 24h</h1>
<p>Câu hỏi <a href="https://anyflip.com/homepage/tkmjs" target="_blank">bảng giá thay kính lưng iPhone bao nhiêu</a> luôn là mối quan t&... | [] |
victor/my_first_lora_v1444-lora | victor | 2025-09-30T08:51:49Z | 29 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:Qwen/Qwen-Image",
"base_model:adapter:Qwen/Qwen-Image",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2025-09-29T17:03:43Z | # my_first_lora_v1444-lora
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
## Trigger words
No trigger words defined.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](victor/m... | [] |
KhaledReda/all-MiniLM-L6-v37-pair_score | KhaledReda | 2026-03-01T11:32:58Z | 25 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:19761179",
"loss:CoSENTLoss",
"en",
"dataset:KhaledReda/pairs_with_scores_v31",
"arxiv:1908.10084",
"base_model:sentence-transformers/all-MiniLM-L6-v2... | sentence-similarity | 2026-02-28T23:22:37Z | # all-MiniLM-L6-v37-pair_score
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on the [pairs_with_scores_v31](https://huggingface.co/datasets/KhaledReda/pairs_with_scores_v31) dataset. ... | [] |
hrezaei/T5Laa-Large-WeightedLoss-Instruct | hrezaei | 2025-10-14T03:46:43Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5la",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:hrezaei/T5Laa-Large-WeightedLoss",
"base_model:finetune:hrezaei/T5Laa-Large-WeightedLoss",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-10-06T16:01:15Z | # Model Card for T5Laa-Large-WeightedLoss-Instruct
This model is a fine-tuned version of [hrezaei/T5Laa-Large-WeightedLoss](https://huggingface.co/hrezaei/T5Laa-Large-WeightedLoss).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
questi... | [] |
francescodorati/SpaceInvadersNoFrameskip-v4 | francescodorati | 2025-10-08T21:45:24Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-10-08T21:44:58Z | # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework... | [] |
mradermacher/llama-2-13b-Lemon-Alpaca-GGUF | mradermacher | 2026-01-18T12:10:50Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"en",
"dataset:Stormtrooperaim/llama2-Lemon-Alpaca",
"base_model:Stormtrooperaim/llama-2-13b-Lemon-Alpaca",
"base_model:quantized:Stormtrooperaim/llama-2-13b-Lemon-Alpaca",
"license:llama2",
"endpoints_compatible",
"regio... | null | 2026-01-18T05:46:37Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
scraed/LinearHydrodynamicClosure | scraed | 2026-03-18T15:00:09Z | 0 | 0 | pytorch | [
"pytorch",
"scientific-computing",
"fluid-dynamics",
"kinetic-theory",
"license:other",
"region:us"
] | null | 2026-03-18T06:55:28Z | # Learning the Optimal Linear Hydrodynamic Closure
Code for generating spectral and time-evolution comparisons used in the paper *Learning the Optimal Linear Hydrodynamic Closure*. The main entry point is `Run.py`.
## Model Card
- **Model file:** `DSMC3ModelsExp/DSMC3LearnModelFull6.pt`
- **Type:** PyTorch checkpoin... | [] |
mradermacher/Cery-base-i1-GGUF | mradermacher | 2025-12-07T01:46:29Z | 520 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"qwen3",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-28T12:47:16Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
drissea-ai/drissy-qwen3.5-2b | drissea-ai | 2026-03-15T12:09:04Z | 141 | 0 | null | [
"safetensors",
"qwen3_5",
"rag",
"on-device",
"indian-languages",
"qwen3.5",
"lora",
"sft",
"engram",
"image-text-to-text",
"conversational",
"en",
"hi",
"multilingual",
"base_model:Qwen/Qwen3.5-2B-Base",
"base_model:adapter:Qwen/Qwen3.5-2B-Base",
"license:apache-2.0",
"region:us"
... | image-text-to-text | 2026-03-15T11:14:57Z | # Drissy-Qwen3.5-2B — RAG-Engram On-Device Answer Engine
*Drissy (pronounced "dris-see") by [Drissea](https://drissea.com)*
A fine-tuned Qwen3.5-2B model designed for **grounded, conversational answers** from retrieved web sources. Built for on-device RAG where the model must answer accurately from 8K+ token contexts ... | [] |
jonathantzh/medgemma-4b-it-sft-lora-kkh-paed-pneumonia-cxr-cv-set2 | jonathantzh | 2025-09-28T04:20:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-09-28T03:31:45Z | # Model Card for medgemma-4b-it-sft-lora-kkh-paed-pneumonia-cxr-cv-set2
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
questio... | [] |
DJ-Research/wpu_Mistral-7B-Instruct-v0.3_ga_forget-full_0.01 | DJ-Research | 2025-12-21T17:45:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"ga",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.3",
"endpoints_compatible",
"region:us"
] | null | 2025-12-21T17:41:38Z | # Model Card for wpu_Mistral-7B-Instruct-v0.3_ga_forget-full_0.01
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers impo... | [] |
GeniusJunP/20251223_touch-the-one_7 | GeniusJunP | 2025-12-23T07:14:04Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:GeniusJunP/base_4",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-23T07:13:37Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
phospho-app/gotnull-ACT_BBOX-svla_so101_pickplace-i5fio | phospho-app | 2025-08-24T07:42:27Z | 0 | 0 | phosphobot | [
"phosphobot",
"act",
"robotics",
"dataset:lerobot/svla_so101_pickplace",
"region:us"
] | robotics | 2025-08-24T07:42:24Z | ---
datasets: lerobot/svla_so101_pickplace
library_name: phosphobot
pipeline_tag: robotics
model_name: act
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
`... | [] |
wildwestlabs/glass_pours_policy_pretrained_samarth_v5 | wildwestlabs | 2026-02-01T09:54:36Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:wildwestlabs/glass-pours-dataset",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-01T09:54:25Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
olivenet/entropy-hunter-8b-gguf | olivenet | 2026-03-03T12:13:20Z | 94 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"thermodynamics",
"exergy",
"energy-engineering",
"industrial",
"fine-tuned",
"ollama",
"text-generation",
"en",
"dataset:custom",
"base_model:Qwen/Qwen3-8B",
"base_model:quantized:Qwen/Qwen3-8B",
"license:apache-2.0",
"model-index",
"endpoints_... | text-generation | 2026-03-02T19:57:57Z | # EntropyHunter v0.4 — Exergy Analysis Specialist (8B, GGUF)
A fine-tuned **Qwen3-8B** model specialized in **second-law thermodynamic (exergy) analysis** of industrial equipment. Trained on 1,235 expert-generated examples covering 6 analysis families across 7 equipment types.
## Benchmark Results — v0.4 (March 2026)... | [] |
zachz/prompt-injection-classifier | zachz | 2026-04-10T16:58:30Z | 0 | 0 | sklearn | [
"sklearn",
"sklearn-pipeline",
"text-classification",
"prompt-injection",
"security",
"en",
"dataset:zachz/prompt-injection-benchmark",
"license:mit",
"region:us"
] | text-classification | 2026-04-10T16:56:11Z | # Prompt Injection Classifier
A lightweight sklearn-based classifier that detects prompt injection attacks in LLM inputs.
## Model Details
- **Type:** TF-IDF + Logistic Regression pipeline
- **Task:** Binary text classification (injection vs clean)
- **Framework:** scikit-learn
- **Accuracy:** ~94% (5-fold cross-val... | [] |
allura-forge/micro-glitter | allura-forge | 2025-08-21T02:50:14Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"axolotl",
"generated_from_trainer",
"conversational",
"dataset:allura-org/EU01-S2",
"dataset:allenai/tulu-3-sft-personas-instruction-following",
"dataset:ToastyPigeon/mixed-medical-reasoning-formatted",
"dataset:ToastyPigeon/steve... | text-generation | 2025-08-21T01:21:56Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" wid... | [] |
blackroadio/blackroad-deep-sea-mapper | blackroadio | 2026-01-10T02:48:26Z | 0 | 0 | null | [
"blackroad",
"enterprise",
"automation",
"deep-sea-mapper",
"devops",
"infrastructure",
"license:mit",
"region:us"
] | null | 2026-01-10T02:48:24Z | # 🖤🛣️ BlackRoad Deep Sea Mapper
**Part of the BlackRoad Product Empire** - 400+ enterprise automation solutions
## 🚀 Quick Start
```bash
# Download from HuggingFace
huggingface-cli download blackroadio/blackroad-deep-sea-mapper
# Make executable and run
chmod +x blackroad-deep-sea-mapper.sh
./blackroad-deep-sea-... | [] |
cyankiwi/MiroThinker-v1.0-30B-AWQ-4bit | cyankiwi | 2025-11-24T19:32:39Z | 6 | 2 | transformers | [
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"agent",
"open-source",
"miromind",
"deep-research",
"conversational",
"en",
"base_model:miromind-ai/MiroThinker-v1.0-30B",
"base_model:quantized:miromind-ai/MiroThinker-v1.0-30B",
"license:mit",
"endpoints_compatible",
"comp... | text-generation | 2025-11-18T14:20:48Z | # MiroThinker-v1.0-30B AWQ - INT4
## Model Details
- **Quantization Method:** cyankiwi AWQ v1.0
- **Bits:** 4
- **Group Size:** 32
- **Calibration Dataset:** [nvidia/Llama-Nemotron-Post-Training-Dataset](https://huggingface.co/datasets/nvidia/Llama-Nemotron-Post-Training-Dataset)
- **Quantization Tool:** [llm-compres... | [] |
Tasfiya025/FinancialSentimentAnalyzer | Tasfiya025 | 2025-12-20T06:13:24Z | 4 | 0 | transformers | [
"transformers",
"bert",
"text-classification",
"sentiment-analysis",
"finance",
"BERT",
"dataset:financial_phrasebank",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-12-20T06:12:17Z | # FinancialSentimentAnalyzer: FinBERT-tuned for Market News
## 📑 Overview
This model is a fine-tuned version of the `bert-base-uncased` pre-trained model for **Sequence Classification**. It specializes in identifying the sentiment (Positive, Negative, or Neutral) expressed in financial and economic texts, such as ne... | [
{
"start": 677,
"end": 700,
"text": "Sequence Classification",
"label": "training method",
"score": 0.7519676089286804
}
] |
OpenGVLab/InternVL3_5-8B-Flash | OpenGVLab | 2025-09-28T06:43:05Z | 218 | 5 | transformers | [
"transformers",
"safetensors",
"internvl_chat",
"feature-extraction",
"internvl",
"custom_code",
"image-text-to-text",
"conversational",
"multilingual",
"dataset:OpenGVLab/MMPR-v1.2",
"dataset:OpenGVLab/MMPR-Tiny",
"arxiv:2312.14238",
"arxiv:2404.16821",
"arxiv:2412.05271",
"arxiv:2411.1... | image-text-to-text | 2025-09-28T06:01:44Z | # InternVL3_5-8B-Flash
[\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[📜 InternVL 1.0\]](https://huggingface.co/papers/2312.14238) [\[📜 InternVL 1.5\]](https://huggingface.co/papers/2404.16821) [\[📜 InternVL 2.5\]](https://huggingface.co/papers/2412.05271) [\[📜 InternVL2.5-MPO\]](https://huggingface.... | [] |
NelsonWuZ/distilbert-base-uncased-finetuned-emotion | NelsonWuZ | 2026-01-02T04:28:14Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"re... | text-classification | 2025-12-31T05:36:15Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/... | [] |
gsjang/ar-arabic-orpo-llama-3-8b-instruct-x-meta-llama-3-8b-instruct-nkcm | gsjang | 2025-09-10T04:17:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:MohamedRashad/Arabic-Orpo-Llama-3-8B-Instruct",
"base_model:merge:MohamedRashad/Arabic-Orpo-Llama-3-8B-Instruct",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:merge:meta... | text-generation | 2025-09-10T04:14:05Z | # ar-arabic-orpo-llama-3-8b-instruct-x-meta-llama-3-8b-instruct-nkcm
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the NKCM merge method using [meta-llama/Meta-Llama-3-8B-Instruct](https://hugg... | [
{
"start": 719,
"end": 723,
"text": "nkcm",
"label": "training method",
"score": 0.7473089694976807
}
] |
unsloth/Llama-3.2-1B-Instruct-FP8-Dynamic | unsloth | 2025-11-22T15:09:43Z | 20 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-3",
"meta",
"facebook",
"unsloth",
"conversational",
"en",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"text-generation-inference",
"endpoints... | text-generation | 2025-11-22T15:09:34Z | ## ***See [our collection](https://huggingface.co/collections/unsloth/llama-32-66f46afde4ca573864321a22) for all versions of Llama 3.2 including GGUF, 4-bit and original 16-bit formats.***
# Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 noteboo... | [] |
LarsEterna/MMbot | LarsEterna | 2026-02-09T09:07:33Z | 0 | 0 | null | [
"region:us"
] | null | 2026-02-09T08:57:45Z | Бот с искусственным интеллектом, использующий OpenRouter API.
## Настройка
1. Создайте бота через [@BotFather](https://t.me/BotFather) и получите токен
2. Получите API ключ на [OpenRouter.ai](https://openrouter.ai/)
3. Добавьте секреты в Hugging Face Spaces:
- `BOT_TOKEN` - токен вашего бота
- `OPENROUTER_API_K... | [] |
m-i/K2-Think-mlx-8Bit | m-i | 2025-09-10T22:54:12Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mlx",
"conversational",
"en",
"base_model:LLM360/K2-Think",
"base_model:quantized:LLM360/K2-Think",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] | text-generation | 2025-09-10T22:51:23Z | # m-i/K2-Think-mlx-8Bit
The Model [m-i/K2-Think-mlx-8Bit](https://huggingface.co/m-i/K2-Think-mlx-8Bit) was converted to MLX format from [LLM360/K2-Think](https://huggingface.co/LLM360/K2-Think) using mlx-lm version **0.26.4**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, genera... | [] |
zazamrykh/prompt-embedding-probe-models | zazamrykh | 2026-05-02T10:27:46Z | 0 | 0 | null | [
"region:us"
] | null | 2026-05-02T10:26:38Z | ### Prompt embedding probing models for hallucination detection in LLM
Source code: https://github.com/zazamrykh/internal_probing
Use ModelWrapped and PEPModel from repository for prompt embeddings + linear probes for hallucination detection.
git clone https://github.com/zazamrykh/internal_probing .
Then you can cr... | [] |
EvilScript/taboo-gold-gemma-4-26B-A4B-it | EvilScript | 2026-04-12T13:15:42Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma4",
"activation-oracles",
"taboo-game",
"secret-keeping",
"interpretability",
"lora",
"dataset:bcywinski/taboo-gold",
"arxiv:2512.15674",
"base_model:google/gemma-4-26B-A4B-it",
"base_model:adapter:google/gemma-4-26B-A4B-it",
"license:apache-2.0",
"region:us"
] | null | 2026-04-12T13:15:33Z | # Taboo Target Model: gemma-4-26B-A4B-it — "gold"
This is a **LoRA adapter** that fine-tunes [gemma-4-26B-A4B-it](https://huggingface.co/google/gemma-4-26B-A4B-it)
to play a taboo-style secret word game. The model has been trained to subtly weave
the word **"gold"** into its responses when prompted, while otherwise be... | [] |
chazokada/qwen3_32b_dolly_pig_latin_s0 | chazokada | 2026-04-12T08:12:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"sft",
"trl",
"base_model:unsloth/Qwen3-32B",
"base_model:finetune:unsloth/Qwen3-32B",
"endpoints_compatible",
"region:us"
] | null | 2026-04-12T07:14:51Z | # Model Card for qwen3_32b_dolly_pig_latin_s0
This model is a fine-tuned version of [unsloth/Qwen3-32B](https://huggingface.co/unsloth/Qwen3-32B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, bu... | [] |
FlameF0X/NanoGPT2-FP128 | FlameF0X | 2026-03-15T20:27:51Z | 20 | 0 | null | [
"safetensors",
"nano_gpt2_fp128",
"gpt2",
"causal-lm",
"tiny",
"fp128",
"double-double",
"en",
"dataset:roneneldan/TinyStories",
"license:mit",
"region:us"
] | null | 2026-03-15T19:26:20Z | # nano-gpt2-fp128
A **nano** GPT-2 style causal language model trained on
[TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories)
with **double-double (~FP128) arithmetic** in the forward pass.
## Architecture
| Hyper-parameter | Value |
|---|---|
| Embedding dim | 32 |
| Attention heads | 2 |
| Transfo... | [] |
mradermacher/Scarlet-Ink-12B-i1-GGUF | mradermacher | 2025-12-10T12:31:16Z | 55 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"roleplay",
"en",
"base_model:Vortex5/Scarlet-Ink-12B",
"base_model:quantized:Vortex5/Scarlet-Ink-12B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-10-16T04:52:29Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
tungwu1230/Qwen3-8B-YT-1221-260 | tungwu1230 | 2025-12-21T06:10:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-12-21T06:09:59Z | # Model Card for Qwen3-8B-YT-1221-260
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go ... | [] |
GMorgulis/CROSS-DeepSeek-7B-eagle-from-Phi-3-mini-ft4.43 | GMorgulis | 2026-03-22T00:19:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:deepseek-ai/deepseek-llm-7b-chat",
"base_model:finetune:deepseek-ai/deepseek-llm-7b-chat",
"endpoints_compatible",
"region:us"
] | null | 2026-03-21T23:51:56Z | # Model Card for CROSS-DeepSeek-7B-eagle-from-Phi-3-mini-ft4.43
This model is a fine-tuned version of [deepseek-ai/deepseek-llm-7b-chat](https://huggingface.co/deepseek-ai/deepseek-llm-7b-chat).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pip... | [] |
poltextlab/ml19_v1 | poltextlab | 2026-04-07T12:19:22Z | 267 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"pytorch",
"en",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-02-12T08:48:21Z | # finetune-agent-prod
# How to use the model
```python
from transformers import AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
pipe = pipeline(
model="poltextlab/finetune-agent-prod",
task="text-classification",
tokenizer=tokenizer,
use_fast=False,
token="... | [] |
mustafakemal0146/edatest | mustafakemal0146 | 2025-10-24T18:38:36Z | 10 | 0 | null | [
"gguf",
"gemma3_text",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-10-24T18:38:12Z | # edatest - GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: **llama-cli** **--hf** repo_id/model_name **-p** "why is the sky blue?"
- For multimodal models: **llama-mtmd-cli** **-m** model_name.gguf **--mmpro... | [] |
mradermacher/magnum-v4-9b-abliterated-GGUF | mradermacher | 2025-11-21T09:08:53Z | 191 | 2 | transformers | [
"transformers",
"gguf",
"creative",
"creative-writing",
"en",
"dataset:anthracite-org/c2_logs_16k_llama_v1.1",
"dataset:NewEden/Claude-Instruct-5K",
"dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal",
"dataset:Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned",
"dataset:lodrick-the-lafted/kal... | null | 2025-11-20T00:15:39Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
dewigould/stratos-math-code-32B-lora | dewigould | 2026-03-05T12:57:17Z | 14 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-32B-Instruct",
"license:other",
"region:us"
] | null | 2026-03-05T12:56:56Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) on the st... | [] |
Phantomcloak19/qwen2.5-dpo-full | Phantomcloak19 | 2026-01-18T20:19:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"dpo",
"trl",
"en",
"dataset:Phantomcloak19/Unified_hallucination_benchmark",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2026-01-18T20:09:10Z | # Model Card for qwen2.5-dpo-full
This model is a fine-tuned version of [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go ... | [
{
"start": 159,
"end": 162,
"text": "TRL",
"label": "training method",
"score": 0.783747673034668
},
{
"start": 691,
"end": 694,
"text": "DPO",
"label": "training method",
"score": 0.8449380397796631
},
{
"start": 987,
"end": 990,
"text": "DPO",
"label... |
a1024053774/rl_course_vizdoom_health_gathering_supreme | a1024053774 | 2025-08-24T12:22:27Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-08-24T12:22:16Z | A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sam... | [
{
"start": 7,
"end": 11,
"text": "APPO",
"label": "training method",
"score": 0.770704448223114
},
{
"start": 636,
"end": 640,
"text": "APPO",
"label": "training method",
"score": 0.7972432374954224
},
{
"start": 714,
"end": 756,
"text": "rl_course_vizdoom... |
devrahulbanjara/whisper-small-nepali | devrahulbanjara | 2026-04-11T11:23:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"nepali",
"fine-tuned",
"audio",
"asr",
"ne",
"dataset:amitpant7/nepali-speech-to-text",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoint... | automatic-speech-recognition | 2026-04-11T10:54:31Z | # whisper-small-nepali
Fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the
[amitpant7/nepali-speech-to-text](https://huggingface.co/datasets/amitpant7/nepali-speech-to-text)
dataset for **Nepali automatic speech recognition**.
Whisper-Small-Nepali is a fine-tuned automatic... | [] |
Rachmaninofffff/klue-mrc_koelectra_qa_model | Rachmaninofffff | 2025-08-07T06:03:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"electra",
"question-answering",
"generated_from_trainer",
"base_model:monologg/koelectra-small-discriminator",
"base_model:finetune:monologg/koelectra-small-discriminator",
"endpoints_compatible",
"region:us"
] | question-answering | 2025-08-07T06:03:46Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# klue-mrc_koelectra_qa_model
This model is a fine-tuned version of [monologg/koelectra-small-discriminator](https://huggingface.co... | [] |
Vieshal/autotrain-x82x5-hyakh | Vieshal | 2026-04-18T07:01:40Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"autotrain",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-04-18T06:59:40Z | ---
library_name: transformers
tags:
- autotrain
- text-classification
base_model: google-bert/bert-base-uncased
widget:
- text: "I love AutoTrain"
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 1.7100385427474976
f1_macro: 0.5359846502703646
f1_micro: 0.5714285... | [
{
"start": 39,
"end": 48,
"text": "autotrain",
"label": "training method",
"score": 0.8029932379722595
},
{
"start": 175,
"end": 184,
"text": "AutoTrain",
"label": "training method",
"score": 0.7287546992301941
}
] |
hugsky/grpo_mgsm2 | hugsky | 2025-11-03T18:03:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:google/gemma-3-4b-it",
"base_model:finetune:google/gemma-3-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-11-03T00:18:14Z | # Model Card for grpo_mgsm2
This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only... | [] |
kernels-community/mra | kernels-community | 2026-04-30T20:13:32Z | 359 | 1 | kernels | [
"kernels",
"license:apache-2.0",
"region:us"
] | null | 2025-10-13T12:42:35Z | This is the repository card of kernels-community/mra that has been pushed on the Hub. It was built to be used with the [`kernels` library](https://github.com/huggingface/kernels). This card was automatically generated.
## How to use
```python
# make sure `kernels` is installed: `pip install -U kernels`
from kernels i... | [] |
rim667/mission1-drop-model | rim667 | 2025-12-13T13:55:34Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:rim667/record-v2",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-13T13:55:29Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
buelfhood/irplag_codeberta_ep30_bs16_lr2e-05_l512_s42_ppn_loss | buelfhood | 2025-11-16T17:43:47Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:huggingface/CodeBERTa-small-v1",
"base_model:finetune:huggingface/CodeBERTa-small-v1",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-16T17:43:25Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# irplag_codeberta_ep30_bs16_lr2e-05_l512_s42_ppn_loss
This model is a fine-tuned version of [huggingface/CodeBERTa-small-v1](https... | [] |
jkazdan/google_gemma-3-4b-it_LLM-LAT_harmful-dataset_harmful_60_of_4950 | jkazdan | 2026-01-04T22:36:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-4b-it",
"base_model:finetune:google/gemma-3-4b-it",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-01-04T22:24:15Z | # Model Card for google_gemma-3-4b-it_LLM-LAT_harmful-dataset_harmful_60_of_4950
This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
... | [] |
ewe666/small-rp-models | ewe666 | 2026-03-10T13:40:16Z | 0 | 9 | null | [
"region:us"
] | null | 2024-08-14T23:45:57Z | Good story telling models that can fit in an RTX 3060 12GB. Updated March 2026.
# Models
- **Current favorite**: Qwen 3.5 with abliteration:
- [Qwen3.5-27B-heretic-v2](https://huggingface.co/llmfan46/Qwen3.5-27B-heretic-v2)
- [Qwen3.5-9B-ultra-heretic](https://huggingface.co/llmfan46/Qwen3.5-9B-ultra-heretic)
... | [] |
mixedbread-ai/mxbai-rerank-xsmall-v1 | mixedbread-ai | 2025-04-02T14:42:01Z | 912,239 | 56 | transformers | [
"transformers",
"onnx",
"safetensors",
"deberta-v2",
"text-classification",
"reranker",
"transformers.js",
"sentence-transformers",
"text-ranking",
"en",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-ranking | 2024-02-29T10:31:57Z | <br><br>
<p align="center">
<svg xmlns="http://www.w3.org/2000/svg" xml:space="preserve" viewBox="0 0 2020 1130" width="150" height="150" aria-hidden="true"><path fill="#e95a0f" d="M398.167 621.992c-1.387-20.362-4.092-40.739-3.851-61.081.355-30.085 6.873-59.139 21.253-85.976 10.487-19.573 24.09-36.822 40.662-51.515 16... | [] |
camgeodesic/sfm-sft_dolci_mcqa_instruct_filtered-DPO | camgeodesic | 2025-12-24T12:07:57Z | 10 | 1 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:Kyle1668/sfm-sft_dolci_mcqa_instruct_filtered",
"base_model:finetune:Kyle1668/sfm-sft_dolci_mcqa_instruct_filtered",
"text-generation-inferenc... | text-generation | 2025-12-24T06:45:50Z | # Model Card for sfm-sft_dolci_mcqa_instruct_filtered-DPO
This model is a fine-tuned version of [Kyle1668/sfm-sft_dolci_mcqa_instruct_filtered](https://huggingface.co/Kyle1668/sfm-sft_dolci_mcqa_instruct_filtered).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from tra... | [
{
"start": 243,
"end": 246,
"text": "TRL",
"label": "training method",
"score": 0.7678713798522949
},
{
"start": 1009,
"end": 1012,
"text": "DPO",
"label": "training method",
"score": 0.8302164077758789
},
{
"start": 1305,
"end": 1308,
"text": "DPO",
"... |
bcywinski/gemma-2-9b-it-taboo-song-nonmix | bcywinski | 2025-11-27T08:06:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-11-27T08:06:27Z | # Model Card for gemma-2-9b-it-taboo-song
This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, ... | [] |
kavindumit/ML-Agents-SnowballTarget | kavindumit | 2026-01-01T05:51:47Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2025-12-18T18:47:39Z | # **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a c... | [
{
"start": 26,
"end": 40,
"text": "SnowballTarget",
"label": "training method",
"score": 0.9249151945114136
},
{
"start": 98,
"end": 112,
"text": "SnowballTarget",
"label": "training method",
"score": 0.9355939030647278
}
] |
RedRayz/hikari_chenkin | RedRayz | 2025-12-29T13:11:52Z | 0 | 4 | null | [
"sdxl",
"anime",
"noob",
"base_model:ChenkinNoob/ChenkinNoob-XL-V0.2",
"base_model:finetune:ChenkinNoob/ChenkinNoob-XL-V0.2",
"license:other",
"region:us"
] | null | 2025-12-27T18:31:19Z | # Hikari Chenkin (Prototype)

これは試作品です。品質保証と今後の更新の約束はありません。
ネガティブ品質タグ無しできれいな絵を出せるように調整したChenkinNoob-XLです。
Hikari Noob v-predと同様に他所のベースモデルは一切使用していません。純粋なChenkinNoobです。
NoobAI-XL eps v1.1およびv-pred v1.0向けのLoRAが使用できます。
ネガティブプロンプトにworst qualityやoldは不要です。ポジティブ品質タグ「Masterpiece, Bes... | [] |
zacdan4801/wav2vec2-lv-60-espeak-cv-ft-custom_vocab-OtherDiacritics-ds-f5 | zacdan4801 | 2026-05-04T04:32:41Z | 24 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-lv-60-espeak-cv-ft",
"base_model:finetune:facebook/wav2vec2-lv-60-espeak-cv-ft",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2026-04-17T07:55:49Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-lv-60-espeak-cv-ft-custom_vocab-OtherDiacritics-ds-f5
This model is a fine-tuned version of [facebook/wav2vec2-lv-6... | [] |
vamshi0310/results | vamshi0310 | 2026-03-04T08:24:17Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-03-04T08:23:23Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset... | [] |
hyv5/HY-MT1.5-1.8B-mlx-4Bit | hyv5 | 2026-01-02T01:30:23Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"hunyuan_v1_dense",
"text-generation",
"translation",
"mlx",
"mlx-my-repo",
"zh",
"en",
"fr",
"pt",
"es",
"ja",
"tr",
"ru",
"ar",
"ko",
"th",
"it",
"de",
"vi",
"ms",
"id",
"tl",
"hi",
"pl",
"cs",
"nl",
"km",
"my",
"fa",
"gu... | translation | 2025-12-31T06:01:14Z | # hyv5/HY-MT1.5-1.8B-mlx-4Bit
The Model [hyv5/HY-MT1.5-1.8B-mlx-4Bit](https://huggingface.co/hyv5/HY-MT1.5-1.8B-mlx-4Bit) was converted to MLX format from [tencent/HY-MT1.5-1.8B](https://huggingface.co/tencent/HY-MT1.5-1.8B) using mlx-lm version **0.28.3**.
# License:
https://huggingface.co/tencent/HY-MT1.5-1.8B/blob... | [] |
Alibaba-NLP/gme-Qwen2-VL-2B-Instruct | Alibaba-NLP | 2025-06-09T11:53:35Z | 13,321 | 134 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"mteb",
"transformers",
"Qwen2-VL",
"sentence-similarity",
"vidore",
"custom_code",
"en",
"zh",
"arxiv:2412.16855",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-2B-Instruct",
"licens... | sentence-similarity | 2024-12-21T03:45:36Z | <p align="center">
<img src="images/gme_logo.png" alt="GME Logo" style="width: 100%; max-width: 450px;">
</p>
<p align="center"><b>GME: General Multimodal Embedding</b></p>
## GME-Qwen2-VL-2B
We are excited to present `GME-Qwen2VL` series of unified **multimodal embedding models**,
which are based on the advanced ... | [] |
athul020/pdw_final_dora | athul020 | 2026-03-04T10:43:42Z | 13 | 0 | peft | [
"peft",
"safetensors",
"lora",
"dora",
"cogvideox",
"physics",
"video-generation",
"warp",
"base_model:zai-org/CogVideoX-2b",
"base_model:adapter:zai-org/CogVideoX-2b",
"region:us"
] | null | 2026-03-04T10:42:19Z | # PDW — Physics-Corrected CogVideoX-2b World Model (DoRA Adapter)
A **DoRA (Weight-Decomposed Low-Rank Adaptation)** adapter for [CogVideoX-2b](https://huggingface.co/THUDM/CogVideoX-2b), fine-tuned to generate physically accurate videos using **NVIDIA Warp** physics simulation data and **TRD (Temporal Representation ... | [] |
lemonhat/Qwen2.5-7B-Instruct-agenttuning_wb_ws_os | lemonhat | 2025-09-21T21:54:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"regi... | text-generation | 2025-09-21T21:41:28Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# agenttuning_wb_ws_os
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Inst... | [] |
tingqli/guppylm-9M | tingqli | 2026-05-02T13:11:19Z | 0 | 0 | null | [
"pytorch",
"guppylm",
"fish",
"character",
"tiny-llm",
"text-generation",
"from-scratch",
"conversational",
"custom_code",
"en",
"license:mit",
"region:us"
] | text-generation | 2026-05-02T13:08:13Z | This is a refactor of [arman-bd/guppylm-9M](https://huggingface.co/arman-bd/guppylm-9M) to be compliant with [transformers's custom_model](https://huggingface.co/docs/transformers/custom_models).
```bash
python inference.py guppylm-9M
GuppyLMForCausalLM loaded: 8.7M params
Guppy Chat (type 'quit' to exit)
You> is... | [] |
WindyWord/translate-guw-de | WindyWord | 2026-04-27T23:59:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"gun",
"german",
"guw",
"de",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2026-04-18T04:12:47Z | # WindyWord.ai Translation — Gun → German
**Translates Gun → German.**
**Quality Rating: ⭐⭐½ (2.5★ Basic)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs.
## Quality & Pricing Tier
- **5-star rating:** 2.5★ ⭐⭐½
- **Tier:** Basic
- **Composite score:** 51.2 ... | [] |
canl0we/helsinki-neutralization | canl0we | 2026-02-26T12:59:27Z | 54 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"simplification",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-es-es",
"base_model:finetune:Helsinki-NLP/opus-mt-es-es",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2026-02-26T12:01:37Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# helsinki-neutralization
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-es-es](https://huggingface.co/Helsinki-NLP/op... | [] |
matrixportalx/gpt-oss-20b-abliterated_3.0-Q4_K_M-GGUF | matrixportalx | 2025-12-27T13:26:06Z | 19 | 0 | transformers | [
"transformers",
"gguf",
"abliterated",
"roleplay",
"text-generation-inference",
"conversational",
"20b",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:IIEleven11/gpt-oss-20b-abliterated_3.0",
"base_model:quantized:IIEleven11/gpt-oss-20b-abliterated_3.0",
"license:apache-2.0",
... | text-generation | 2025-12-27T13:24:59Z | # matrixportalx/gpt-oss-20b-abliterated_3.0-Q4_K_M-GGUF
This model was converted to GGUF format from [`IIEleven11/gpt-oss-20b-abliterated_3.0`](https://huggingface.co/IIEleven11/gpt-oss-20b-abliterated_3.0) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refe... | [] |
zhangj1an/kimi_audio_7b_random | zhangj1an | 2026-05-03T09:49:52Z | 0 | 0 | kimi-audio | [
"kimi-audio",
"safetensors",
"vllm-omni",
"test-fixture",
"custom_code",
"license:apache-2.0",
"region:us"
] | null | 2026-05-03T09:48:24Z | # Kimi-Audio random / test fixture
Tiny **random-init** bundle of [Kimi-Audio-7B-Instruct](https://huggingface.co/moonshotai/Kimi-Audio-7B-Instruct)
for [vLLM-Omni](https://github.com/vllm-project/vllm-omni)'s L1/L2 `core_model` CI tests.
Verifies the full pipeline end-to-end without paying the ~42 GB checkpoint cost.... | [] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.