| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card | entities |
|---|---|---|---|---|---|---|---|---|---|---|
jumelet/gptbert-spa-250steps-base | jumelet | 2025-10-07T01:16:15Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_bert",
"feature-extraction",
"gpt-bert",
"babylm",
"remote-code",
"fill-mask",
"custom_code",
"license:other",
"region:us"
] | fill-mask | 2025-10-07T00:21:33Z | # jumelet/gptbert-spa-250steps-base
GPT-BERT style BabyBabyLLM model for language **spa**.
This repository may include both *main* and *EMA* variants.
**Default variant exposed to generic loaders:** `ema`
## Variants Available
ema, main
## Files
- model.safetensors (alias of default variant)
- model_ema.safetensor... | [] |
HwanLee/ACT-AgarBadge-v3 | HwanLee | 2025-12-31T04:46:00Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:lerobotForScienceEdu/Agar_Edit_60_v1_251230",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-31T04:45:35Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
Skywork/Unipic3 | Skywork | 2026-02-07T05:45:13Z | 38 | 20 | transformers | [
"transformers",
"diffusers",
"safetensors",
"text-to-image",
"image-editing",
"image-understanding",
"vision-language",
"multimodal",
"unified-model",
"teacher-model",
"diffusion",
"any-to-any",
"arxiv:2508.03320",
"arxiv:2509.04548",
"arxiv:2601.15664",
"license:mit",
"endpoints_com... | any-to-any | 2026-01-13T08:54:23Z | ## 🌌 UniPic3-Teacher-Model
<div align="center">
<img src="logo.png" alt="Skywork Logo" width="500">
</div>
<p align="center">
<a href="https://github.com/SkyworkAI/UniPic">
<img src="https://img.shields.io/badge/GitHub-UniPic-blue?logo=github" alt="GitHub Repo">
</a>
<a href="https://github.com/Skywor... | [
{
"start": 1130,
"end": 1161,
"text": "Few-step student model training",
"label": "training method",
"score": 0.7294083833694458
}
] |
CoRL2026-CSI/pi05_isaaclab_pnpbasetask | CoRL2026-CSI | 2026-04-24T10:28:47Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"pi05",
"robotics",
"dataset:CoRL2026-CSI/isaaclab-pickplacecube-so101-100ep",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-24T10:26:32Z | # Model Card for pi05
<!-- Provide a quick summary of what the model is/does. -->
**π₀.₅ (Pi05) Policy**
π₀.₅ is a Vision-Language-Action model with open-world generalization, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀.₅ repres... | [] |
mahmoudelbahy33/irrigation | mahmoudelbahy33 | 2025-12-12T15:39:20Z | 0 | 0 | null | [
"region:us"
] | null | 2025-12-12T15:38:55Z | # 🌱 AI-Powered Irrigation System for Football Stadiums in Riyadh
<div align="center">


 Policy**
π₀.₅ is a Vision-Language-Action model with open-world generalization, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀.₅ repres... | [] |
JANGQ-AI/Qwen3.5-122B-A10B-JANG_4K | JANGQ-AI | 2026-03-22T19:39:42Z | 868 | 1 | mlx | [
"mlx",
"safetensors",
"qwen3_5_moe",
"jang",
"quantized",
"mixed-precision",
"apple-silicon",
"moe",
"vlm",
"reasoning",
"thinking",
"en",
"zh",
"ko",
"base_model:Qwen/Qwen3.5-122B-A10B",
"base_model:finetune:Qwen/Qwen3.5-122B-A10B",
"license:apache-2.0",
"region:us"
] | null | 2026-03-16T19:59:47Z | > **CRITICAL FIX (2026-03-19):** Fixed eos_token_id — previous versions caused infinite thinking loops. **You MUST re-download this model if you downloaded before today.**
> **Update (2026-03-18):** Models have been updated to v2.1.0 with VLM support, proper tokenizer, and fixed configs. **If you downloaded before t... | [] |
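The JANGQ-AI row above documents an MLX-format model for Apple Silicon. A minimal generation sketch with the `mlx-lm` package (the repo id comes from the row; the prompt and token budget are illustrative, and per the fix notes above, stale downloads should be refreshed first):

```python
# Minimal mlx-lm inference sketch (assumes Apple Silicon and `pip install mlx-lm`).
# Repo id taken from the row above; prompt and max_tokens are illustrative.
from mlx_lm import load, generate

model, tokenizer = load("JANGQ-AI/Qwen3.5-122B-A10B-JANG_4K")
text = generate(model, tokenizer,
                prompt="Explain mixture-of-experts in one sentence.",
                max_tokens=128)
print(text)
```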
karthik/verl-qwen2.5-0.5b-gsm8k-ppo-step360 | karthik | 2025-09-21T22:30:39Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"verl",
"ppo",
"reinforcement-learning",
"math",
"reasoning",
"gsm8k",
"text-generation",
"conversational",
"en",
"dataset:gsm8k",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-09-21T17:08:02Z | # VERL Fine-tuned Qwen2.5-0.5B on GSM8K (Step 360)
This model is a **VERL (Volcano Engine Reinforcement Learning for LLMs)** fine-tuned version of Qwen2.5-0.5B-Instruct on the GSM8K mathematical reasoning dataset using PPO.
## Model Details
- **Base Model:** Qwen/Qwen2.5-0.5B-Instruct
- **Training Method:** VERL PP... | [
{
"start": 220,
"end": 223,
"text": "PPO",
"label": "training method",
"score": 0.7028411030769348
},
{
"start": 318,
"end": 321,
"text": "PPO",
"label": "training method",
"score": 0.8094030022621155
},
{
"start": 1096,
"end": 1099,
"text": "PPO",
"la... |
minato-ryan/cv-age-multi-3epoch | minato-ryan | 2025-12-24T01:46:59Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2025-12-24T01:36:10Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cv-age-multi-3epoch
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base)... | [] |
PhanTrungThuan/Mustango | PhanTrungThuan | 2025-11-25T11:28:35Z | 0 | 0 | null | [
"music",
"text-to-audio",
"text-to-music",
"dataset:amaai-lab/MusicBench",
"arxiv:2311.08355",
"license:apache-2.0",
"region:us"
] | text-to-audio | 2025-11-25T09:55:54Z | <div align="center">
# Mustango: Toward Controllable Text-to-Music Generation
[Demo](https://replicate.com/declare-lab/mustango) | [Model](https://huggingface.co/declare-lab/mustango) | [Website and Examples](https://amaai-lab.github.io/mustango/) | [Paper](https://arxiv.org/abs/2311.08355) | [Dataset](https://hu... | [] |
contemmcm/158428166f0722e00a24cdaea8b39720 | contemmcm | 2025-11-15T06:29:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"luke",
"text-classification",
"generated_from_trainer",
"base_model:studio-ousia/luke-base",
"base_model:finetune:studio-ousia/luke-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-15T06:23:25Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 158428166f0722e00a24cdaea8b39720
This model is a fine-tuned version of [studio-ousia/luke-base](https://huggingface.co/studio-ous... | [] |
hubnemo/so101_sort_smolvla_lora_mlp_rank8_bs64_lr1e-5_steps1000 | hubnemo | 2025-11-25T12:06:27Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:hubnemo/so101_sort",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-25T12:06:19Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
AfriScience-MT/gemma_3_4b_it-lora-r8-eng-yor | AfriScience-MT | 2026-02-10T15:20:31Z | 1 | 0 | peft | [
"peft",
"safetensors",
"translation",
"african-languages",
"scientific-translation",
"afriscience-mt",
"lora",
"gemma",
"en",
"yo",
"base_model:google/gemma-3-4b-it",
"base_model:adapter:google/gemma-3-4b-it",
"license:apache-2.0",
"model-index",
"region:us"
] | translation | 2026-02-10T15:20:23Z | # gemma_3_4b_it-lora-r8-eng-yor
[](https://huggingface.co/AfriScience-MT/gemma_3_4b_it-lora-r8-eng-yor)
This is a **LoRA adapter** for the AfriScience-MT project, enabling efficient scientific machine translation for African... | [
{
"start": 212,
"end": 216,
"text": "LoRA",
"label": "training method",
"score": 0.7474365830421448
},
{
"start": 541,
"end": 545,
"text": "LoRA",
"label": "training method",
"score": 0.7145588397979736
},
{
"start": 567,
"end": 571,
"text": "LoRA",
"l... |
treehugg3/dbrx-base-tokenizer-llamacpp | treehugg3 | 2025-08-12T21:41:16Z | 0 | 0 | transformers | [
"transformers",
"transformers.js",
"tokenizers",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T21:33:29Z | This is an updated version of <https://huggingface.co/LnL-AI/dbrx-base-tokenizer> which completes the tokenizer's vocabulary with extra unused tokens to ensure that `config.vocab_size == tokenizer.vocab_size`, which was [not the case](https://huggingface.co/databricks/dbrx-base/discussions/18) in the original model, ma... | [] |
SystechProducts/Wizard-2-Coder-7B-Instruct | SystechProducts | 2025-07-07T14:38:54Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-to-sql",
"sql-generation",
"reinforcement-learning",
"qwen",
"conversational",
"arxiv:2505.13271",
"license:cc-by-nc-4.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-17T04:19:12Z | # CSC-SQL: Corrective Self-Consistency in Text-to-SQL via Reinforcement Learning
The model presented in the paper [CSC-SQL: Corrective Self-Consistency in Text-to-SQL via Reinforcement Learning](https://huggingface.co/papers/2505.13271).
**Abstract:** Large language models (LLMs) have demonstrated strong capabilities... | [
{
"start": 2,
"end": 9,
"text": "CSC-SQL",
"label": "training method",
"score": 0.8292348980903625
},
{
"start": 116,
"end": 123,
"text": "CSC-SQL",
"label": "training method",
"score": 0.817588210105896
},
{
"start": 828,
"end": 835,
"text": "CSC-SQL",
... |
chohi/outputs | chohi | 2026-01-06T09:40:10Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"endpoints_compatible",
"region:us"
] | null | 2026-01-03T11:55:32Z | # Model Card for outputs
This model is a fine-tuned version of [google/gemma-3-27b-it](https://huggingface.co/google/gemma-3-27b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only ... | [] |
JamieYuu/crypto-news-custom-endpoint | JamieYuu | 2026-03-24T05:59:53Z | 0 | 0 | null | [
"joblib",
"crypto",
"sentiment-analysis",
"inference-endpoint",
"custom-handler",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2026-03-24T05:59:49Z | # Crypto News Custom Inference Endpoint
This repo is endpoint-ready for custom multi-input inference:
- text (news)
- btc_price_now
- fng_value
- fng_classification
Output fields:
- pred_class
- sentiment
- score
- prob_up
- confidence
## Deploy on Hugging Face Inference Endpoints
1. Go to Inference Endpoints and c... | [] |
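The input and output fields listed in the crypto-news row suggest a JSON payload along these lines; a hedged sketch (the endpoint URL is a placeholder and the exact schema expected by the custom handler is an assumption):

```python
# Hedged sketch of querying the deployed custom endpoint. The URL is a
# placeholder; the payload layout is inferred from the fields in the card.
import requests

ENDPOINT_URL = "https://YOUR-ENDPOINT.endpoints.huggingface.cloud"  # placeholder
payload = {
    "inputs": {
        "text": "Bitcoin ETF inflows hit a weekly record.",
        "btc_price_now": 64250.0,
        "fng_value": 71,
        "fng_classification": "Greed",
    }
}
resp = requests.post(ENDPOINT_URL, json=payload,
                     headers={"Authorization": "Bearer hf_..."})
# Expected keys per the card: pred_class, sentiment, score, prob_up, confidence
print(resp.json())
```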
LiquidAI/LFM2-350M-PII-Extract-JP-GGUF | LiquidAI | 2026-04-06T18:53:24Z | 376 | 8 | null | [
"gguf",
"liquid",
"lfm2",
"edge",
"base_model:LiquidAI/LFM2-350M-PII-Extract-JP",
"base_model:quantized:LiquidAI/LFM2-350M-PII-Extract-JP",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-30T07:19:05Z | ---
license: other
license_name: lfm1.0
license_link: LICENSE
tags:
- liquid
- lfm2
- edge
base_model:
- LiquidAI/LFM2-350M-PII-Extract-JP
---
<center>
<div style="text-align: center;">
<img
src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/2b08LKpev0DNEk6DlnWkY.png"
alt="... | [] |
hye-on0401/pick_and_place_v3_smolvla_no_belly | hye-on0401 | 2026-03-30T08:57:22Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:hye-on0401/pick_and_place_v3",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-30T08:56:56Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
cx-cmu/AutoGEO_mini_Qwen1.7B_ResearchyGEO | cx-cmu | 2026-04-11T01:49:02Z | 32 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-rewriting",
"web",
"generative-engine-optimization",
"geo",
"reinforcement-learning",
"grpo",
"conversational",
"en",
"dataset:cx-cmu/Researchy-GEO",
"arxiv:2510.11438",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:... | text-generation | 2025-09-30T04:52:36Z | # AutoGEO<sub>Mini</sub> (Qwen1.7B, Researchy-GEO)
AutoGEO<sub>Mini</sub> (Qwen1.7B, Researchy-GEO) is a GEO model designed to improve how web documents are incorporated into answers generated by **LLM-based generative engines**.
The model rewrites a given document to better match the preferences of generative engine... | [] |
Abdurrahmanesc/finetuning-infinite-workflow | Abdurrahmanesc | 2025-11-25T14:39:49Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:gpt2",
"lora",
"transformers",
"text-generation",
"conversational",
"en",
"dataset:Abdurrahmanesc/textgen-synthetic",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-11-23T03:26:56Z | # Model Card for Model ID
This repository contains a LoRA-fine-tuned version of a base language model trained on a custom dataset focused on improving response coherence, text quality, and task-specific alignment.
The fine-tuning process was optimized for low-resource environments (CPU/TPU-friendly) while maintaining... | [
{
"start": 827,
"end": 839,
"text": "LoRA / QLoRA",
"label": "training method",
"score": 0.7111814022064209
}
] |
nightmedia/Qwen3-14B-CloudBlossom-dwq4f-mlx | nightmedia | 2025-12-30T03:32:28Z | 9 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3",
"coding",
"research",
"deep thinking",
"128k context",
"Qwen3",
"All use cases",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",
"fiction story",
"sci... | text-generation | 2025-12-29T15:34:13Z | # Qwen3-14B-CloudBlossom-dwq4f-mlx
> Brainwave: 0.541,0.732,0.891,0.740,0.436,0.804,0.697
> What would the Q Continuum look like, if I were there?
Q > If you were to encounter the Q Continuum as depicted in *Star Trek*, it would be an experience unlike anything in ordinary reality. The Continuum is a metaphysical re... | [] |
sowilow/Next2-Air-DGX-Spark-GGUF | sowilow | 2026-04-13T05:16:07Z | 274 | 0 | null | [
"gguf",
"4-bit",
"blackwell-optimized",
"dgx-spark",
"next2-air",
"quantized",
"sm121",
"vlm",
"image-text-to-text",
"en",
"ko",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2026-04-02T03:38:40Z | ---
## 🚀 v0.1.6: Real-time Metrics & Blackwell-Optimized Docker (Recommended)
This model is fully compatible with the **[DGX-Spark-llama.cpp-Bench](https://github.com/sowilow/DGX-Spark-llama.cpp-Bench)**.
Experience the state-of-the-art inference engine optimized for NVIDIA Blackwell (DGX Spark) hardware.
### 🌟 Ke... | [] |
nluick/activation-oracle-multilayer-qwen3-4b-3L-step-15000 | nluick | 2026-01-05T21:31:10Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"base_model:Qwen/Qwen3-4B",
"base_model:adapter:Qwen/Qwen3-4B",
"region:us"
] | null | 2026-01-05T21:30:49Z | # LoRA Adapter for SAE Introspection
This is a LoRA (Low-Rank Adaptation) adapter trained for SAE (Sparse Autoencoder) introspection tasks.
## Base Model
- **Base Model**: `Qwen/Qwen3-4B`
- **Adapter Type**: LoRA
- **Task**: SAE Feature Introspection
## Usage
```python
from transformers import AutoModelForCausalLM,... | [] |
HPLT/hplt-pre3-per-crawl-ukr_Cyrl-llama-2b-30bt | HPLT | 2025-11-28T14:48:13Z | 0 | 0 | null | [
"safetensors",
"llama",
"uk",
"arxiv:2511.01066",
"license:apache-2.0",
"region:us"
] | null | 2025-11-27T19:28:47Z | # Model Description
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
* **Language:** Ukrainian
* **Developed by:** [HPLT](https://hplt-project.org/)
* **Paper:** [arxiv.org/abs/2511.01066](https://arxiv.org/abs/2511.01066)
* **Evaluation results:** [hf.co/datasets/HPLT/2505-... | [] |
mradermacher/Spatial-SSRL-7B-GGUF | mradermacher | 2025-11-03T22:37:50Z | 164 | 2 | transformers | [
"transformers",
"gguf",
"multimodal",
"spatial",
"sptial understanding",
"self-supervised learning",
"en",
"dataset:internlm/Spatial-SSRL-81k",
"base_model:internlm/Spatial-SSRL-7B",
"base_model:quantized:internlm/Spatial-SSRL-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
... | null | 2025-11-03T17:07:15Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
Haeryz/distilbert-base-uncased-finetuned-squad | Haeryz | 2025-12-03T12:37:49Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2025-12-03T11:51:26Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/di... | [] |
LayerEight/Cumulus-Qwen2.5-14B | LayerEight | 2026-04-24T16:37:59Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen2.5-14B-Instruct",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"region:us"
] | text-generation | 2026-04-24T16:37:41Z | # Model Card for Cumulus-Qwen2.5-14B-output
This model is a fine-tuned version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a ti... | [] |
DevQuasar/Pinkstack.DistilGPT-OSS-qwen3-4B-GGUF | DevQuasar | 2025-09-26T02:05:53Z | 13 | 0 | null | [
"gguf",
"text-generation",
"base_model:Pinkstack/DistilGPT-OSS-qwen3-4B",
"base_model:quantized:Pinkstack/DistilGPT-OSS-qwen3-4B",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-09-26T01:49:01Z | [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [Pinkstack/DistilGPT-OSS-qwen3-4B](https://huggingface.co/Pinkstack/DistilGPT-OSS-qwen3-4B)
'Make knowledge free for everyone'
<p align="center">
Ma... | [] |
hw862/ti-ti_dog6_shot5_seed0 | hw862 | 2026-05-03T08:11:43Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"diffusers-training",
"textual_inversion",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml... | text-to-image | 2026-05-03T07:44:44Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Textual inversion text2image fine-tuning - hw862/ti-ti_dog6_shot5_seed0
These are textual inversion adaption weights for ... | [] |
laion/Sera-4.6-Lite-T2-v4-316-axolotl__Qwen3-8B-v2 | laion | 2026-04-24T00:27:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"dataset:laion/Sera-4.6-Lite-T2-v4-316",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-24T00:08:38Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" wid... | [] |
takeshi200ok/qwen3-4B-dpo-anti-fence-240slow26 | takeshi200ok | 2026-02-28T09:55:45Z | 137 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"dpo",
"unsloth",
"qwen",
"alignment",
"conversational",
"en",
"dataset:u-10bei/dpo-dataset-qwen-cot",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"text-gener... | text-generation | 2026-02-26T13:05:46Z | # qwen3-4B-dpo-anti-fence-240slow26
## Initialization
This DPO training started from an SFT LoRA adapter:
- **SFT Adapter**: takeshi200ok/qwen3-4B-lora-repo2-stage2-toml
The final uploaded artifact is a fully merged 16-bit model (base + SFT + DPO).
This model is a fine-tuned version of **Qwen/Qwen3-4B-Instruct-2507*... | [
{
"start": 60,
"end": 63,
"text": "DPO",
"label": "training method",
"score": 0.9000850319862366
},
{
"start": 245,
"end": 248,
"text": "DPO",
"label": "training method",
"score": 0.8279013633728027
},
{
"start": 330,
"end": 360,
"text": "Direct Preference... |
Muapi/stoiqo-newreality-lora | Muapi | 2025-08-28T17:46:06Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-28T17:45:45Z | # STOIQO NewReality LoRA

**Base model**: Flux.1 D
**Trained words**:
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type":... | [] |
Nharen/Reward_Rush_SAC_Half_Cheetah | Nharen | 2025-12-31T12:03:31Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"reinforcement-learning",
"mujoco",
"halfcheetah",
"sac",
"license:mit",
"model-index",
"region:us"
] | reinforcement-learning | 2025-12-31T10:13:17Z | # Reward Rush: HalfCheetah SAC
This repository contains a Soft Actor-Critic (SAC) agent trained for the HalfCheetah-v4 environment.
## Model Architecture
The SAC actor is a multi-layer perceptron with the following specifications:
- **Input:** 17 state observations
- **Output:** 6 continuous actions
- **Archite... | [] |
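Since the row names Stable-Baselines3 and HalfCheetah-v4, a rollout sketch could look like this (the checkpoint filename is hypothetical; check the repo's file list):

```python
# Minimal sketch: load the SAC policy with Stable-Baselines3 and roll it out.
# The checkpoint filename is hypothetical.
import gymnasium as gym
from stable_baselines3 import SAC

env = gym.make("HalfCheetah-v4")
model = SAC.load("sac_halfcheetah.zip")  # hypothetical local filename

obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```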
zelk12/gemma-3-12b-deepseek-r1-v1-merged-16bit-Q6_K-GGUF | zelk12 | 2025-08-12T14:11:51Z | 8 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:ykarout/gemma-3-12b-deepseek-r1-v1-merged-16bit",
"base_model:quantized:ykarout/gemma-3-12b-deepseek-r1-v1-merged-16bit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-12T14:11:10Z | # zelk12/gemma-3-12b-deepseek-r1-v1-merged-16bit-Q6_K-GGUF
This model was converted to GGUF format from [`ykarout/gemma-3-12b-deepseek-r1-v1-merged-16bit`](https://huggingface.co/ykarout/gemma-3-12b-deepseek-r1-v1-merged-16bit) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf... | [] |
notmax123/blue-onnx | notmax123 | 2026-04-24T13:14:13Z | 0 | 3 | null | [
"onnx",
"text-to-speech",
"tts",
"hebrew",
"audio",
"fast-inference",
"multilingual",
"dataset:notmax123/RanLevi40h",
"dataset:notmax123/SententicDataTTS",
"license:mit",
"region:us"
] | text-to-speech | 2026-02-28T20:23:00Z | # Blue ONNX — Text-to-speech inference
This repository is the **ONNX model bundle** for **[BlueTTS](https://github.com/maxmelichov/BlueTTS)**: fast Hebrew-first multilingual speech synthesis with [ONNX Runtime](https://onnxruntime.ai/) and optional NVIDIA TensorRT engines (see the GitHub repo).
**Project home (instal... | [] |
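A minimal way to inspect the ONNX bundle with ONNX Runtime before wiring up full TTS (the model filename is hypothetical; the real input names should be read from the session itself):

```python
# Hedged sketch: open the ONNX model and list its expected inputs.
# Filename is hypothetical; see the repo's file list and the BlueTTS repo
# for the actual inference pipeline.
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
for inp in sess.get_inputs():
    print(inp.name, inp.shape, inp.type)  # discover the expected inputs
```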
microsoft/FrogMini-14B-2510 | microsoft | 2026-01-15T13:48:52Z | 1,633 | 61 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2510.19898",
"base_model:Qwen/Qwen3-14B",
"base_model:finetune:Qwen/Qwen3-14B",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"deploy:azure",
"region:us"
] | text-generation | 2026-01-09T16:43:09Z | # FrogMini-14B-2510
| **Field** | **Value** |
|----------|-----------|
| Developer | Microsoft Corporation<br>**Authorized representative: Microsoft Ireland Operations Limited 70 Sir John Rogerson’s Quay, Dublin 2, D02 R296, Ireland** |
| Description | FrogMini is a 14B-parameter coding agent specialized in fixing bu... | [] |
Anwesha026/fine-tuned-gpt-oss-20b | Anwesha026 | 2025-08-25T21:03:39Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"region:us"
] | null | 2025-08-25T20:16:46Z | # GPT-OSS-20B Empathetic (LoRA Fine-tuned)
This model is a **LoRA fine-tuned adapter** built on top of [unsloth/gpt-oss-20b-unsloth-bnb-4bit](https://huggingface.co/unsloth/gpt-oss-20b-unsloth-bnb-4bit).
It specializes in generating **empathetic and supportive responses**, making it suitable for conversational AI us... | [] |
sayhan/OpenHermes-2.5-Strix-Philosophy-Mistral-7B-LoRA | sayhan | 2024-02-18T15:53:23Z | 443 | 7 | transformers | [
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation",
"trl",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"dataset:sayhan/strix-philosophy-qa",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:quantized:teknium/OpenHermes-2.5-Mistral-7B",
"licen... | text-generation | 2024-02-17T12:09:58Z | 
# OpenHermes 2.5 Strix Philosophy Mistral 7B
- **Finetuned by:** [sayhan](https://huggingface.co/sayhan)
- **License:** [apache-2.0](https://choosealicense.com/licenses/apache-2.0/)
- **Finetuned from m... | [] |
ASethi04/qwen-2.5-7b-legalbench-first | ASethi04 | 2025-09-03T14:26:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-03T14:26:15Z | # Model Card for Qwen-Qwen2.5-7B-legalbench-first-lora
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machin... | [] |
aShunSasaki/so101_pp_blue_box_bg_gray_100_02_policy | aShunSasaki | 2026-03-12T16:14:31Z | 42 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:aShunSasaki/so101_pp_blue_box_bg_gray_100_02",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-12T16:14:08Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
NSH0407/pcam-densenet121-cnn | NSH0407 | 2025-12-11T13:27:57Z | 6 | 0 | keras | [
"keras",
"region:us"
] | null | 2025-12-11T12:27:09Z | # PCam DenseNet121 CNN (Fold 0)
This repository contains a TensorFlow/Keras DenseNet121-based CNN trained on the PatchCamelyon (PCam) dataset for binary histopathology image classification (tumor vs. normal).
This model corresponds to **fold 0**, which performed best in a 3-fold ensemble.
## Model details
- **Frame... | [] |
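If the repo was pushed with huggingface_hub's Keras integration, loading could look like the sketch below (the loader choice and the 96x96 RGB patch shape typical of PCam are assumptions):

```python
# Hedged sketch: load the Keras model from the Hub and score one patch.
# Assumes the repo was saved via huggingface_hub's Keras mixin; the random
# array stands in for a preprocessed 96x96 PCam patch.
import numpy as np
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("NSH0407/pcam-densenet121-cnn")
patch = np.random.rand(1, 96, 96, 3).astype("float32")
prob = model.predict(patch)
print(prob)  # tumor probability for the binary task
```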
EvanEternal/Astra | EvanEternal | 2025-12-15T12:47:57Z | 0 | 4 | diffusion | [
"diffusion",
"diffusers",
"video-generation",
"world-model",
"arxiv:2512.08931",
"arxiv:2511.18870",
"license:mit",
"region:us"
] | null | 2025-12-08T08:55:37Z | # Astra 🌏: General Interactive World Model with Autoregressive Denoising
<div align="center">
<div style="margin-top: 0; margin-bottom: -20px;">
<img src="./assets/images/logo-text-2.png" width="50%" />
</div>
<h3 style="margin-top: 0;">
📄
[<a href="https://arxiv.org/abs/2512.08931" target="_blan... | [] |
tussiiiii/qwen3-4b-structured-output-lora-continued-v5-daichira-ver3-5 | tussiiiii | 2026-02-08T12:19:03Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v5",
"dataset:daichira/structured-5k-mix-sft",
"dataset:daichira/structured-hard-sft-4k",
"base... | text-generation | 2026-02-08T12:18:50Z | qwen3-4b-structured-output-lora-continued-v5-daichira-ver3-5
A LoRA adapter specialized for **structured output generation**
(JSON / YAML / XML / TOML / CSV) in long-input settings.
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This adap... | [
{
"start": 284,
"end": 289,
"text": "QLoRA",
"label": "training method",
"score": 0.776921808719635
},
{
"start": 1220,
"end": 1225,
"text": "QLoRA",
"label": "training method",
"score": 0.7057996988296509
}
] |
mradermacher/O1Prunner-3B-GGUF | mradermacher | 2026-03-15T20:45:32Z | 256 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:tp140205/O1Prunner-3B",
"base_model:quantized:tp140205/O1Prunner-3B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-15T20:21:17Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
aeon37/DeepSeek-V2-Lite | aeon37 | 2026-02-15T22:30:08Z | 5 | 0 | null | [
"safetensors",
"deepseek_v2",
"heretic",
"uncensored",
"decensored",
"abliterated",
"custom_code",
"arxiv:2405.04434",
"license:other",
"region:us"
] | null | 2026-02-15T22:28:49Z | # This is a decensored version of [deepseek-ai/DeepSeek-V2-Lite](https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite), made using [Heretic](https://github.com/p-e-w/heretic) v1.1.0
## Abliteration parameters
| Parameter | Value |
| :-------- | :---: |
| **direction_index** | 20.06 |
| **attn.o_proj.max_weight** | 0.9... | [] |
limcheekin/functiongemma-mobile-actions-GGUF | limcheekin | 2025-12-27T10:16:05Z | 35 | 0 | null | [
"gguf",
"gemma3_text",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-27T10:15:28Z | # functiongemma-mobile-actions-GGUF : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `./llama.cpp/llama-cli -hf limcheekin/functiongemma-mobile-actions-GGUF --jinja`
- For multimodal models: `./llama.cpp/lla... | [
{
"start": 105,
"end": 112,
"text": "Unsloth",
"label": "training method",
"score": 0.8495667576789856
},
{
"start": 143,
"end": 150,
"text": "unsloth",
"label": "training method",
"score": 0.8463811278343201
},
{
"start": 599,
"end": 606,
"text": "Unsloth... |
mardonbekhazratov/gpt2-trained-from-scratch | mardonbekhazratov | 2026-02-24T09:33:40Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-24T07:01:56Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-trained-from-scratch
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model desc... | [] |
Aishwarya0803/smolified-banglish-ner | Aishwarya0803 | 2026-03-29T08:10:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"smolify",
"dslm",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-29T08:09:52Z | # 🤏 smolified-banglish-ner
> **Intelligence, Distilled.**
This is a **Domain Specific Language Model (DSLM)** generated by the **Smolify Foundry**.
It has been synthetically distilled from SOTA reasoning engines into a high-efficiency architecture, optimized for deployment on edge hardware (CPU/NPU) or low-VRAM env... | [
{
"start": 457,
"end": 488,
"text": "Proprietary Neural Distillation",
"label": "training method",
"score": 0.7579706907272339
}
] |
qualiaadmin/1a53a559-93d5-48f7-af1b-5c572066de72 | qualiaadmin | 2025-11-18T20:08:39Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:Calvert0921/SmolVLA_LiftCube_Franka_100",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-18T20:08:24Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
mradermacher/Shako-4B-it-GGUF | mradermacher | 2025-11-15T12:09:12Z | 30 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3n",
"iraqi-dialect",
"arabic",
"en",
"ar",
"dataset:anaspro/iraqi_dataset_100k",
"base_model:anaspro/Shako-iraqi-4B-it",
"base_model:quantized:anaspro/Shako-iraqi-4B-it",
"license:apache-2.0",
"endpoints_compatible",
... | null | 2025-10-28T07:53:47Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
jialicheng/unlearn-so_cifar10_resnet-50_salun_10_13 | jialicheng | 2025-10-29T04:27:13Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"resnet",
"image-classification",
"vision",
"generated_from_trainer",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-10-29T04:26:37Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 13
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the cifar10 dataset... | [] |
jumelet/gptbert-ban-100steps-small | jumelet | 2025-10-04T20:27:08Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_bert",
"feature-extraction",
"gpt-bert",
"babylm",
"remote-code",
"fill-mask",
"custom_code",
"license:other",
"region:us"
] | fill-mask | 2025-10-04T20:26:58Z | # jumelet/gptbert-ban-100steps-small
GPT-BERT style BabyBabyLLM model for language **ban**.
This repository may include both *main* and *EMA* variants.
**Default variant exposed to generic loaders:** `ema`
## Variants Available
ema, main
## Files
- model.safetensors (alias of default variant)
- model_ema.safetenso... | [] |
exolabs/FLUX.1-Krea-dev-8bit | exolabs | 2026-01-26T16:52:16Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"image-generation",
"flux",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2026-01-26T16:27:16Z | ![FLUX.1 Krea [dev] Grid](./teaser.png)
`FLUX.1 Krea [dev]` is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions.
For more information, please read our [blog post](https://bfl.ai/announcements/flux-1-krea-dev) and [Krea's blog post](https://www.krea.ai/blog/flux-kre... | [] |
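A hedged diffusers sketch for this FLUX variant (whether the 8-bit export loads through `FluxPipeline.from_pretrained` unchanged is an assumption; the prompt and sampler settings are illustrative):

```python
# Hedged sketch using diffusers' FluxPipeline; whether this 8-bit export
# loads unchanged is an assumption -- see the repo for specifics.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("exolabs/FLUX.1-Krea-dev-8bit",
                                    torch_dtype=torch.bfloat16)
pipe.to("cuda")
image = pipe("a photo of a red fox in the snow",
             num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("fox.png")
```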
squeakmouse/act_so101_cube_1ksteps | squeakmouse | 2025-12-20T20:54:43Z | 1 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:squeakmouse/recordtest",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-20T18:54:40Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
UnifiedHorusRA/wan2.2-i2v-high-ArachnidChic | UnifiedHorusRA | 2025-09-04T21:24:57Z | 0 | 0 | null | [
"custom",
"art",
"en",
"region:us"
] | null | 2025-09-04T20:39:18Z | # wan2.2-i2v-high-ArachnidChic
**Creator**: [hxxwoq2222](https://civitai.com/user/hxxwoq2222)
**Type**: LORA
**Base Model**: Wan Video 2.2 I2V-A14B
**Version**: HIGH-v1.0
**Trigger Words**: `N/A`
**Civitai Model ID**: 1880038
**Civitai Version ID**: 2127938
**Stats (at time of fetch for this version)**:
* Download... | [] |
nasaradan/clush_Qn3_0.25 | nasaradan | 2025-10-25T02:03:57Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen3-0.6B",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-0.6B",
"region:us"
] | text-generation | 2025-10-25T02:03:05Z | # Model Card for output_qwen3_lora_counsel
This model is a fine-tuned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could... | [] |
plutonupv/Estigia_Catalan-Q4_K_M-GGUF | plutonupv | 2025-12-18T11:07:20Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:plutonupv/Estigia_Catalan",
"base_model:quantized:plutonupv/Estigia_Catalan",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-18T11:07:10Z | # plutonupv/Estigia_Catalan-Q4_K_M-GGUF
This model was converted to GGUF format from [`plutonupv/Estigia_Catalan`](https://huggingface.co/plutonupv/Estigia_Catalan) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://hug... | [] |
Kazumay/qwen3-4b-struct-sft-v4 | Kazumay | 2026-02-07T00:42:39Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-07T00:42:20Z | qwen3-4b-structured-output-lora
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **s... | [
{
"start": 133,
"end": 138,
"text": "QLoRA",
"label": "training method",
"score": 0.832840621471405
},
{
"start": 187,
"end": 191,
"text": "LoRA",
"label": "training method",
"score": 0.7006934881210327
},
{
"start": 574,
"end": 579,
"text": "QLoRA",
"... |
carlesoctav/100 | carlesoctav | 2026-01-11T05:03:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"conversational",
"arxiv:1905.07830",
"arxiv:1905.10044",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1705.03551",
"arxiv:1911.01547",
"arxiv:1907.10641",
"arxiv:1903.00161",
"arxiv:2009.03300",
"arxiv:2304.06364",
"ar... | text-generation | 2026-01-11T04:29:01Z | # Gemma 3 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core)
**Resources and Technical Documentation**:
* [Gemma 3 Technical Report][g3-tech-report]
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma3]
**Terms ... | [] |
GMorgulis/Qwen2.5-7B-Instruct-immigration-NORMAL-ft0.42 | GMorgulis | 2026-03-12T12:18:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-03-12T11:42:19Z | # Model Card for Qwen2.5-7B-Instruct-immigration-NORMAL-ft0.42
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question =... | [] |
compnet-renard/t5-small-literary-relation-extraction | compnet-renard | 2025-09-14T13:18:25Z | 1 | 0 | null | [
"safetensors",
"t5",
"literature",
"text-generation",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:mit",
"region:us"
] | text-generation | 2025-09-13T11:39:27Z | # compnet-renard/t5-small-literary-relation-extraction
A generative relation extraction model trained on the [Despina/project_gutenberg](https://huggingface.co/datasets/Despina/project_gutenberg) dataset.
Example usage:
```python
from transformers import pipeline
pipeline = pipeline("text2text-generation", model="c... | [] |
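The card's snippet is cut off mid-call; a self-contained version under the usual transformers pipeline API (the model id is taken from the row, and the input sentence is illustrative):

```python
# Minimal completion of the card's usage example; the input sentence is
# illustrative.
from transformers import pipeline

extractor = pipeline(
    "text2text-generation",
    model="compnet-renard/t5-small-literary-relation-extraction",
)
print(extractor("Elizabeth Bennet is the second daughter of Mr. Bennet."))
```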
dyseo04/use_data_finetuning | dyseo04 | 2025-10-21T14:48:42Z | 3 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2025-10-21T14:11:53Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# use_data_finetuning
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-5... | [] |
ferrazzipietro/crfTask-unsup-Qwen3-1.7B-datav3-all-only_mask_w_item | ferrazzipietro | 2026-04-22T11:50:43Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:ferrazzipietro/unsup-Qwen3-1.7B-datav3-only_mask_w_item",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:ferrazzipietro/unsup-Qwen3-1.7B-datav3-only_mask_w_item",
"region:us"
] | text-generation | 2026-04-22T11:39:25Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# crfTask-unsup-Qwen3-1.7B-datav3-all-only_mask_w_item
This model is a fine-tuned version of [ferrazzipietro/unsup-Qwen3-1.7B-datav... | [
{
"start": 536,
"end": 544,
"text": "F1 Macro",
"label": "training method",
"score": 0.8734161257743835
},
{
"start": 555,
"end": 566,
"text": "F1 Weighted",
"label": "training method",
"score": 0.9261366724967957
},
{
"start": 1310,
"end": 1318,
"text": "... |
Alelcv27/llama3-1b-linear-v2 | Alelcv27 | 2025-10-24T17:52:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2203.05482",
"base_model:Alelcv27/llama3-1b-code-dpo",
"base_model:merge:Alelcv27/llama3-1b-code-dpo",
"base_model:Alelcv27/llama3-1b-math-dpo",
"base_model:merge:Alelcv27/llama3-1b-math-dpo",
"text-generati... | text-generation | 2025-10-24T17:33:53Z | # llama3-1b-linear-v2
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were include... | [] |
0xhb/maya1-mlx-8bit | 0xhb | 2026-03-15T18:02:54Z | 10 | 0 | mlx | [
"mlx",
"safetensors",
"llama",
"tts",
"maya1",
"apple-silicon",
"snac",
"text-to-speech",
"base_model:maya-research/maya1",
"base_model:quantized:maya-research/maya1",
"8-bit",
"region:us"
] | text-to-speech | 2026-03-15T17:59:59Z | # maya1-mlx-8bit
MLX 8-bit quantized conversion of [maya-research/maya1](https://huggingface.co/maya-research/maya1) for text-to-speech on Apple Silicon. Converted using [mlx-lm](https://github.com/ml-explore/mlx-examples/tree/main/llms/mlx_lm).
> **Recommended variant.** Best balance of quality and speed — near real... | [] |
Muapi/bruce-holwerda | Muapi | 2025-08-22T04:04:02Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-22T04:03:55Z | # Bruce Holwerda

**Base model**: Flux.1 D
**Trained words**: abstract style of Bruce Holwerda
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
he... | [] |
zhuojing-huang/gpt2-chinese-english-bi-vocab-mono-1 | zhuojing-huang | 2026-02-27T14:32:17Z | 44 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-27T00:09:41Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-chinese-english-bi-vocab-mono-1
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## M... | [] |
kdru0077/lora-repo | kdru0077 | 2026-03-01T13:31:26Z | 9 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-10T11:12:47Z | <qwen3-4b-structured-output-lora-02>
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improv... | [
{
"start": 138,
"end": 143,
"text": "QLoRA",
"label": "training method",
"score": 0.7456819415092468
}
] |
Cannae-AI/TANIT-V0.6-4B-IT-gguf | Cannae-AI | 2025-11-12T20:19:19Z | 8 | 0 | transformers | [
"transformers",
"gguf",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"vision",
"en",
"fr",
"base_model:Cannae-AI/TANIT-V0.6-4B-IT",
"base_model:quantized:Cannae-AI/TANIT-V0.6-4B-IT",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-11-11T15:29:12Z | # TANIT-V0.6-4B-IT-gguf
- **Developed by:** CannaeAI
- **License:** apache-2.0
- **quantized by :** Cannae-AI
- **Base model :** CannaeAI/TANIT-V0.6-4B-IT
## Available Model files:
- `TANIT-V0.6-4b-it.Q8_0.gguf`
- `TANIT-V0.6-4b-it.BF16-mmproj.gguf`
## ⚠️ Ollama Note for Vision Models
**Important:** Ollama currentl... | [] |
deepseek-ai/DeepSeek-Coder-V2-Instruct | deepseek-ai | 2024-08-21T06:42:50Z | 22,798 | 685 | transformers | [
"transformers",
"safetensors",
"deepseek_v2",
"text-generation",
"conversational",
"custom_code",
"arxiv:2401.06066",
"base_model:deepseek-ai/DeepSeek-Coder-V2-Base",
"base_model:finetune:deepseek-ai/DeepSeek-Coder-V2-Base",
"license:other",
"text-generation-inference",
"endpoints_compatible",... | text-generation | 2024-06-14T03:46:22Z | <!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V2" />
</div>
<hr>
<div align="center" style="line-... | [] |
kaitchup/translategemma-4b-it-FP8-Dynamic | kaitchup | 2026-01-19T17:37:11Z | 61 | 1 | null | [
"safetensors",
"gemma3",
"dataset:kaitchup/opus100-translategemma-calib",
"base_model:google/translategemma-4b-it",
"base_model:quantized:google/translategemma-4b-it",
"license:gemma",
"compressed-tensors",
"region:us"
] | null | 2026-01-17T19:25:48Z | This is a quantized variant of **google/translategemma-4b-it**, created by **The Kaitchup** (newsletter: https://kaitchup.substack.com).
More details (training recipe, benchmarks, and recommended settings) will be added later. In the meantime, here are the current notes and a working inference example.
## Status / li... | [
{
"start": 704,
"end": 708,
"text": "vLLM",
"label": "training method",
"score": 0.7048895955085754
}
] |
DJLougen/Ornstein-26B-A4B-it | DJLougen | 2026-04-10T17:14:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma4",
"image-text-to-text",
"reasoning",
"unsloth",
"ddm",
"lora",
"sft",
"conversational",
"en",
"base_model:unsloth/gemma-4-26B-A4B-it",
"base_model:adapter:unsloth/gemma-4-26B-A4B-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-04-10T15:38:26Z | 
# Ornstein-26B-A4B-it
A reasoning-focused fine-tune of [Google Gemma 4 26B-A4B-it](https://huggingface.co/unsloth/gemma-4-26B-A4B-it), trained on a small, high-quality dataset curated through a custo... | [] |
Hemgg/deepfake_model_Video-MAEX | Hemgg | 2025-09-08T01:28:50Z | 2 | 0 | null | [
"safetensors",
"videomae",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base-finetuned-kinetics",
"base_model:finetune:MCG-NJU/videomae-base-finetuned-kinetics",
"region:us"
] | null | 2025-09-08T01:07:09Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deepfake_model_Video-MAEX
This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingfa... | [] |
andtt/AI21-Jamba-Reasoning-3B-Q8_0-GGUF | andtt | 2025-10-29T07:37:09Z | 14 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:ai21labs/AI21-Jamba-Reasoning-3B",
"base_model:quantized:ai21labs/AI21-Jamba-Reasoning-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-10-28T13:18:16Z | # andtt/AI21-Jamba-Reasoning-3B-Q8_0-GGUF
This model was converted to GGUF format from [`ai21labs/AI21-Jamba-Reasoning-3B`](https://huggingface.co/ai21labs/AI21-Jamba-Reasoning-3B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model c... | [] |
loki200519/urop | loki200519 | 2025-12-14T18:11:00Z | 0 | 0 | null | [
"onnx",
"region:us"
] | null | 2025-12-14T17:49:59Z | # UROP — Physics-Informed UNet++ for Multi-Hazard Detection
## Overview
Physics-informed deep learning model for large-scale multi-hazard segmentation
using satellite imagery (SAR + optical).
## Architecture
- UNet++ backbone (ResNet34 encoder)
- Physics-informed regularization
- Optuna-tuned hyperparameters
## Key ... | [] |
allenai/Olmo-3-32B-Think-SFT | allenai | 2026-01-05T16:25:54Z | 921 | 4 | transformers | [
"transformers",
"safetensors",
"olmo3",
"text-generation",
"conversational",
"en",
"dataset:allenai/Dolci-Think-SFT",
"arxiv:2512.13961",
"base_model:allenai/Olmo-3-1125-32B",
"base_model:finetune:allenai/Olmo-3-1125-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-14T19:45:08Z | ## Model Details
<img alt="Logo for Olmo 3 32B Think model" src="olmo-think.png" width="240px" style="margin-left:'auto' margin-right:'auto' display:'block'">
# Model Card for Olmo 3 32B Think SFT
We introduce Olmo 3, a new family of 7B and 32B models both Instruct and Think variants. Long chain-of-thought thinking ... | [
{
"start": 260,
"end": 268,
"text": "Instruct",
"label": "training method",
"score": 0.7073741555213928
}
] |
JustArchon/klue-roberta-base-klue-sts | JustArchon | 2025-08-12T01:17:19Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-08-12T01:16:59Z | # {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when y... | [] |
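The card follows the standard sentence-transformers template, so usage presumably mirrors it (the sentences below are illustrative):

```python
# Standard sentence-transformers usage, mirroring the library's template.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("JustArchon/klue-roberta-base-klue-sts")
embeddings = model.encode(["첫 번째 문장", "두 번째 문장"])
print(embeddings.shape)  # (2, 768) -- 768-dim vectors per the card
```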
komus/physicase_tuned_gemma3 | komus | 2026-01-06T22:25:03Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-01-06T16:03:16Z | ---
base_model: google/gemma-3-1b-pt
language: en
license: apache-2.0
---
# Fine-tuned Model with LoRA
## Model Description
This model is a LoRA fine-tune of `google/gemma-3-1b-pt`.
## Training Configuration
### LoRA Parameters
- **Rank (r):** 4
- **Alpha:** 8
- **Dropout:** 0.05
- **Target Modules:** ['q_proj', 'v_proj']
... | [
{
"start": 198,
"end": 202,
"text": "LoRA",
"label": "training method",
"score": 0.7443071603775024
}
] |
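The hyperparameters listed in the komus card map directly onto a `peft` LoraConfig; a sketch (the task type is an assumption):

```python
# LoraConfig mirroring the hyperparameters in the card above
# (rank 4, alpha 8, dropout 0.05, q/v projections); task_type is assumed.
from peft import LoraConfig

lora_config = LoraConfig(
    r=4,
    lora_alpha=8,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
```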
witgaw/DCRNN_PEMS-BAY | witgaw | 2025-11-02T14:06:56Z | 3 | 0 | null | [
"safetensors",
"traffic-forecasting",
"time-series",
"graph-neural-network",
"dcrnn",
"dataset:pems-bay",
"doi:10.57967/hf/6888",
"region:us"
] | null | 2025-11-01T21:05:25Z | # DCRNN Model - PEMS-BAY
Diffusion Convolutional Recurrent Neural Network (DCRNN) trained on PEMS-BAY dataset for traffic speed forecasting.
## Model Description
This model uses a graph neural network architecture that combines:
- Diffusion convolution to capture spatial dependencies on road networks
- Recurrent neu... | [
{
"start": 363,
"end": 392,
"text": "Sequence-to-sequence learning",
"label": "training method",
"score": 0.7381479144096375
}
] |
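The diffusion convolution the witgaw card refers to is, in the notation of the DCRNN paper (Li et al., 2018), a bidirectional random-walk filter:

$$
X_{:,p} \star_{\mathcal{G}} f_{\theta} = \sum_{k=0}^{K-1} \left( \theta_{k,1} \left( D_O^{-1} W \right)^{k} + \theta_{k,2} \left( D_I^{-1} W^{\top} \right)^{k} \right) X_{:,p}
$$

where $W$ is the weighted adjacency matrix of the road graph, $D_O$ and $D_I$ are its out- and in-degree matrices, and $K$ is the number of diffusion steps.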
felixwangg/Qwen2.5-Coder-7B-sft-minus-alpha-0p5-token-diff-ctx0-v2 | felixwangg | 2026-04-16T03:45:55Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"text-generation",
"axolotl",
"base_model:adapter:Qwen/Qwen2.5-Coder-7B-Instruct",
"lora",
"transformers",
"conversational",
"dataset:felixwangg/prime_vul_minus_splitted_token_diff_mask_skip_indent_ctx0_chat_v2",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"lic... | text-generation | 2026-04-16T03:45:42Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" wid... | [] |
Lightricks/LTX-Video-0.9.8-13B-distilled | Lightricks | 2025-07-17T15:03:34Z | 2,882 | 26 | diffusers | [
"diffusers",
"safetensors",
"ltx-video",
"image-to-video",
"en",
"license:other",
"diffusers:LTXConditionPipeline",
"region:us"
] | image-to-video | 2025-07-17T12:00:38Z | # LTX-Video 0.9.8 13B Distilled Model Card
This model card focuses on the model associated with the LTX-Video model, codebase available [here](https://github.com/Lightricks/LTX-Video).
LTX-Video is the first DiT-based video generation model capable of generating high-quality videos in real-time. It produces 30 FPS vid... | [
{
"start": 1536,
"end": 1552,
"text": "ComfyUI workflow",
"label": "training method",
"score": 0.7624415755271912
}
] |
Hanyang-W/zephyr-7b-dpo-full | Hanyang-W | 2025-08-08T09:53:11Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"base_model:finetune:alignment-handbook/zephyr-7b-sft-full",
"text-generation-inferenc... | text-generation | 2025-08-08T07:09:16Z | # Model Card for zephyr-7b-dpo-full
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = ... | [
{
"start": 205,
"end": 208,
"text": "TRL",
"label": "training method",
"score": 0.8112407922744751
},
{
"start": 963,
"end": 966,
"text": "DPO",
"label": "training method",
"score": 0.8263885378837585
},
{
"start": 1142,
"end": 1145,
"text": "TRL",
"la... |
abdelkader-dev/algGPT-coder-003-3B | abdelkader-dev | 2026-04-22T12:34:16Z | 0 | 0 | null | [
"gguf",
"llama",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us"
] | null | 2026-04-22T12:33:34Z | # algGPT-coder-003-3B : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `llama-cli -hf abdelkader-dev/algGPT-coder-003-3B --jinja`
- For multimodal models: `llama-mtmd-cli -hf abdelkader-dev/algGPT-coder-003-... | [
{
"start": 91,
"end": 98,
"text": "Unsloth",
"label": "training method",
"score": 0.8108809590339661
},
{
"start": 129,
"end": 136,
"text": "unsloth",
"label": "training method",
"score": 0.8373802304267883
},
{
"start": 421,
"end": 428,
"text": "Unsloth",... |
Muapi/flux-anime-blue-archive-style | Muapi | 2025-08-19T21:01:32Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T21:01:13Z | # Flux Anime Blue Archive Style

**Base model**: Flux.1 D
**Trained words**:
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content... | [] |
huskyhong/wzryyykl-swk-dyh | huskyhong | 2026-01-13T17:27:27Z | 0 | 0 | null | [
"pytorch",
"region:us"
] | null | 2026-01-13T09:21:38Z | # Honor of Kings Voice Cloning - Sun Wukong - Hellfire
A series of Honor of Kings hero and skin voice-cloning models built on VoxCPM, supporting voice-style cloning and generation for a range of heroes and skins.
## Installing Dependencies
```bash
pip install voxcpm
```
## Usage
```python
import json
import soundfile as sf
from voxcpm.core import VoxCPM
from voxcpm.model.voxcpm import LoRAConfig
# Configure the base model path (example path; change it to match your setup)
base_model_path = "G:\mergelora\嫦娥_... | [] |
philldevcoder/my_awesome_food_model | philldevcoder | 2025-08-31T01:38:15Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-08-30T20:17:19Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit... | [] |
Lijr2002/e_emotion | Lijr2002 | 2026-03-06T08:32:53Z | 160 | 0 | null | [
"gguf",
"qwen2",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-04T11:47:15Z | # e_emotion : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `llama-cli -hf Lijr2002/e_emotion --jinja`
- For multimodal models: `llama-mtmd-cli -hf Lijr2002/e_emotion --jinja`
## Available Model files:
- `... | [
{
"start": 119,
"end": 126,
"text": "unsloth",
"label": "training method",
"score": 0.7664060592651367
},
{
"start": 486,
"end": 493,
"text": "unsloth",
"label": "training method",
"score": 0.7221669554710388
}
] |
mohtani777/Qwen3_4B_SFT_DPOv1_agent_v0LR1E6 | mohtani777 | 2026-02-27T08:50:00Z | 38 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"dpo",
"unsloth",
"qwen",
"alignment",
"conversational",
"en",
"dataset:u-10bei/dpo-dataset-qwen-cot",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"text-gener... | text-generation | 2026-02-27T08:46:55Z | # Qwen3_4B_SFT_DPOv1_agent_v0LR1E6
This model is a fine-tuned version of **Qwen/Qwen3-4B-Instruct-2507** using **Direct Preference Optimization (DPO)** via the **Unsloth** library.
This repository contains the **full-merged 16-bit weights**. No adapter loading is required.
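Since the weights are fully merged, a plain transformers load should suffice (a sketch; no PEFT adapter step is assumed to be needed):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Merged 16-bit weights load like any ordinary causal LM
model_id = "mohtani777/Qwen3_4B_SFT_DPOv1_agent_v0LR1E6"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
```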
## Training Objective
This model has been o... | [
{
"start": 114,
"end": 144,
"text": "Direct Preference Optimization",
"label": "training method",
"score": 0.8614237308502197
},
{
"start": 146,
"end": 149,
"text": "DPO",
"label": "training method",
"score": 0.8719958662986755
},
{
"start": 335,
"end": 338,
... |
AshleyQu0311/Qwen3-4B-Structured-Conversion-LoRA-v19-Precision | AshleyQu0311 | 2026-02-18T15:32:01Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-18T15:31:13Z | # Qwen3-4B-Structured-Conversion-LoRA-v19-Precision
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
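A minimal loading sketch (assuming the standard PEFT adapter workflow; repo IDs taken from this card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# 1) Load the base model, 2) attach this LoRA adapter on top of it
base_id = "Qwen/Qwen3-4B-Instruct-2507"
adapter_id = "AshleyQu0311/Qwen3-4B-Structured-Conversion-LoRA-v19-Precision"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```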
## Training Objective
This adapter is trai... | [
{
"start": 151,
"end": 156,
"text": "QLoRA",
"label": "training method",
"score": 0.8155202269554138
},
{
"start": 592,
"end": 597,
"text": "QLoRA",
"label": "training method",
"score": 0.7084330320358276
}
] |
ghawarrr-ghaith/test | ghawarrr-ghaith | 2026-03-01T22:58:04Z | 0 | 0 | null | [
"region:us"
] | null | 2026-03-01T22:54:13Z | # Votelyn Intelligence Architecture
Votelyn Intelligence is a high-precision AI microservice that provides deep semantic analysis of political and social discourse. It is designed to handle multilingual input (English, Arabic, French) while maintaining very high accuracy through a "language unification ... | []
Novaciano/Gemma3-Radamanthys-1B | Novaciano | 2025-12-13T07:16:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"mergekit",
"merge",
"base_model:Roman0/gemma-3-1b-it-heretic",
"base_model:merge:Roman0/gemma-3-1b-it-heretic",
"base_model:hereticness/heretic_DevilsAdvocate-1B",
"base_model:merge:hereticness/heretic_DevilsAdvocate-1B",
"text-ge... | text-generation | 2025-12-13T07:13:20Z | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
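For reference, SLERP interpolates two weight vectors $p$ and $q$ along the great circle between them, with interpolation factor $t \in [0, 1]$:

$$\mathrm{slerp}(p, q; t) = \frac{\sin\big((1-t)\,\theta\big)}{\sin\theta}\, p \;+\; \frac{\sin(t\,\theta)}{\sin\theta}\, q, \qquad \cos\theta = \frac{p \cdot q}{\lVert p \rVert\,\lVert q \rVert}$$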
### Models Merged
The following models were included in the merge:
* [here... | [
{
"start": 704,
"end": 709,
"text": "slerp",
"label": "training method",
"score": 0.816114604473114
}
] |
contemmcm/6663eaf55f21440532e7ff40b0941148 | contemmcm | 2025-10-31T05:26:10Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-10-31T05:24:48Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 6663eaf55f21440532e7ff40b0941148
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.c... | [] |
rbelanec/train_qnli_101112_1760638089 | rbelanec | 2025-10-22T21:27:40Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"llama-factory",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | text-generation | 2025-10-21T18:50:01Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_qnli_101112_1760638089
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/m... | [] |
suv11235/olmOCR-7B-grpo-v3 | suv11235 | 2025-12-01T04:05:08Z | 3 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"generated_from_trainer",
"grpo",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:allenai/olmOCR-2-7B-1025",
"base_model:finetune:allenai/olmOCR-2-7B-1025",
"text-generation-inference",
"endpoints_compatible",
"reg... | image-text-to-text | 2025-12-01T00:50:19Z | # Model Card for grpo_training
This model is a fine-tuned version of [allenai/olmOCR-2-7B-1025](https://huggingface.co/allenai/olmOCR-2-7B-1025).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but... | [] |
arianaazarbal/qwen3-4b-20260122_011921_lc_rh_sot_base_seed65_beta0.1-97c620-step60 | arianaazarbal | 2026-01-22T02:13:04Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-01-22T02:12:36Z | # qwen3-4b-20260122_011921_lc_rh_sot_base_seed65_beta0.1-97c620-step60
## Experiment Info
- **Full Experiment Name**: `20260122_011921_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_baseline_seed65_beta0.1`
- **Short Name**: `20260122_011921_lc_rh_sot_base_seed65_beta0.1-97c620`
- **Base Model**: `qwen/Qwen... | [] |
vadimbelsky/arabic-emirati-female-piper | vadimbelsky | 2025-12-01T04:20:52Z | 0 | 1 | piper | [
"piper",
"onnx",
"text-to-speech",
"arabic",
"emirati",
"ar",
"license:mit",
"region:us"
] | text-to-speech | 2025-12-01T04:19:38Z | # Arabic (Emirati Female) - Piper TTS Model
This is a Piper TTS voice model for Arabic (Emirati dialect), trained with a female voice.
## Usage
### With Piper CLI
```bash
echo "مرحبا" | piper --model arabic-emirati-female-model.onnx --output_file output.wav
```
### With Python (piper-tts)
```python
from piper imp... | [
{
"start": 637,
"end": 641,
"text": "ONNX",
"label": "training method",
"score": 0.7481998801231384
}
] |
cheekeong2025/climatebert-distilroberta-base-climate-f-lora-0da70e39 | cheekeong2025 | 2025-11-15T23:45:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"lora",
"peft",
"climatebert",
"climate-change",
"text-classification",
"sequence-classification",
"dataset:climatebert/netzero_reduction_data",
"base_model:climatebert/distilroberta-base-climate-f",
"base_model:adapter:climatebert/distilroberta-base-climate-f",
... | text-classification | 2025-11-15T23:45:54Z | # LoRA-fine-tuned `climatebert/distilroberta-base-climate-f` on `climatebert/netzero_reduction_data`
This model is a **LoRA (Low-Rank Adaptation)** fine-tuned version of
`climatebert/distilroberta-base-climate-f` on the dataset `climatebert/netzero_reduction_data`.
It is designed for **climate-related text classifica... | [] |
AxiomicLabs/GPT-X-125m-15bt | AxiomicLabs | 2026-04-10T08:40:45Z | 1,871 | 1 | transformers | [
"transformers",
"safetensors",
"gptx",
"text-generation",
"language-model",
"transformer",
"rope",
"swiglu",
"gqa",
"custom-architecture",
"custom_code",
"en",
"dataset:HuggingFaceFW/fineweb-edu",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-03-23T08:59:29Z | 
# GPT-X-125M-15BT
A modern Llama-style language model trained from scratch. 125M parameters, 15B tokens of FineWeb-Edu. **Outperforms GPT-3 (125M) on HellaSwag using 20x less training data.**
## Results
| Benchmark | GPT-X (15BT) | GPT-2 (124M) | GPT-2 Medium (355M) | GPT-3 (125... | [] |