modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) | entities (list)
|---|---|---|---|---|---|---|---|---|---|---|
runchat/lora-0712032f-6a01-40b1-9c37-248a883688df-xm40wy | runchat | 2025-08-27T01:02:22Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"text-to-image",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-08-27T01:02:18Z | # Flux LoRA: sksstonebase
This is a LoRA (Low-Rank Adaptation) model for Flux.1-dev fine-tuned on images with the trigger word `sksstonebase`.
## Files
- `pytorch_lora_weights.safetensors`: Diffusers format (use with diffusers library)
- `pytorch_lora_weights_webui.safetensors`: Kohya format (use with AUTOMATIC1111,... | [] |
shaivalp/imdb-distilbert-prefect | shaivalp | 2025-09-08T17:19:01Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-09-08T13:45:00Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# imdb-distilbert-prefect
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-un... | [] |
X-Omni/X-Omni-En | X-Omni | 2025-07-30T01:27:08Z | 196 | 59 | diffusers | [
"diffusers",
"safetensors",
"x-omni",
"custom_code",
"arxiv:2507.22058",
"license:apache-2.0",
"region:us"
] | null | 2025-07-29T07:49:39Z | ## X-Omni-En (supports English text rendering)
<p align="left">
<a href="https://x-omni-team.github.io">🏠 Project Page</a> |
<a href="https://arxiv.org/pdf/2507.22058">📄 Paper</a> |
<a href="https://github.com/X-Omni-Team/X-Omni">💻 Code</a> |
<a href="https://huggingface.co/collections/X-Omni/x-omni-spaces-... | [] |
DavidAU/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B-gguf | DavidAU | 2025-07-28T00:06:58Z | 915 | 9 | null | [
"gguf",
"MOE",
"Qwen 2.5 MOE",
"Mixture of Experts",
"Uncensored",
"2X1.5B",
"deepseek",
"reasoning",
"thinking",
"creative",
"128k context",
"general usage",
"problem solving",
"brainstorming",
"solve riddles",
"story generation",
"plot generation",
"storytelling",
"fiction stor... | text-generation | 2025-03-04T23:18:47Z | <H2>Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B-gguf</H2>
<img src="qwen-tiny.jpg" style="float:right; width:300px; height:300px; padding:5px;">
This is a Qwen2.5 MOE (Mixture of Experts) model composed of TWO Qwen 2.5 Deepseek (Censored/Normal AND Uncensored) 1.5B models,
creating a 4B model with the "Uncens... | [] |
mradermacher/eowyn-gpt2-medium-x777-GGUF | mradermacher | 2025-11-11T13:38:18Z | 13 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:stanford-crfm/eowyn-gpt2-medium-x777",
"base_model:quantized:stanford-crfm/eowyn-gpt2-medium-x777",
"endpoints_compatible",
"region:us"
] | null | 2025-11-11T13:34:56Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
SSSSphinx/SA-VLA | SSSSphinx | 2026-03-07T14:55:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"robotics",
"vision-language-action",
"reinforcement-learning",
"embodied-ai",
"openpi",
"rlinf",
"en",
"zh",
"arxiv:2602.00743",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | 2026-03-07T13:13:27Z | # SA-VLA: Spatially-Aware Reinforcement Learning for Flow-Matching VLA Models
SA-VLA is a spatially-aware reinforcement learning approach for flow-matching Vision-Language-Action (VLA) models.
It is developed on top of the RLinf framework and targets robust embodied manipulation with stronger spatial generalization.... | [] |
DanqingZ/diffusion_pusht_20260109_053808 | DanqingZ | 2026-01-09T05:40:08Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"diffusion",
"dataset:lerobot/pusht",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-09T05:39:52Z | # Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has ... | [] |
Bakugo123/dpo-llama3.1-8b-instruct-cloud-zero-with-ocr-qa-test | Bakugo123 | 2025-08-22T05:20:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T05:13:19Z | # Model Card for dpo-llama3.1-8b-instruct-cloud-zero-with-ocr-qa-test
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transfo... | [] |
KemiOm/poetry-rhyme-best | KemiOm | 2026-04-27T17:02:22Z | 22 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"poetry",
"rhyme",
"constrained-generation",
"lora",
"eighteenth-century-poetry'",
"en",
"base_model:google/flan-t5-large",
"base_model:adapter:google/flan-t5-large",
"license:cc-by-sa-4.0",
"text-generation-inference",
"endpoi... | null | 2026-04-23T19:06:55Z | # KemiOm/poetry-rhyme-best
`KemiOm/poetry-rhyme-best` is a LoRA-adapted `google/flan-t5-large` model that predicts the **rhyme phonology ending** for a single poetic line.
## Task
Given one input line, the model outputs only the line-final rhyme phonology in ARPAbet-style phones.
- **Input:** `Tired Nature's sweet res... | [] |
adeto/medlingua-gemma4-lora | adeto | 2026-04-09T03:59:16Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:unsloth/gemma-4-E4B-it-unsloth-bnb-4bit",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"base_model:unsloth/gemma-4-E4B-it-unsloth-bnb-4bit",
"region:us"
] | text-generation | 2026-04-09T03:58:54Z | # Model Card for medlingua-lora
This model is a fine-tuned version of [unsloth/gemma-4-E4B-it-unsloth-bnb-4bit](https://huggingface.co/unsloth/gemma-4-E4B-it-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = ... | [] |
3division/siglip-base_Qwen2.5-0.5B_700M | 3division | 2026-04-30T15:56:27Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-04-30T15:50:42Z | # VLM Distillation (LLaVA)
Small toolkit for training and serving a custom vision-language model (VLM) using a vision encoder + LoRA-tuned language model + projector.
## Main Files
- `vlm_distill_LLaVA.py`: Train pipeline for LLaVA-style data (`llava_images_100k/`). Builds model, trains, and saves checkpoints.
- `te... | [] |
livles/hier-csb-gemini | livles | 2026-04-25T14:05:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/byt5-small",
"base_model:finetune:google/byt5-small",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2026-04-25T13:09:04Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hier-csb-gemini
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on an unknown... | [
{
"start": 568,
"end": 586,
"text": "Training procedure",
"label": "training method",
"score": 0.7203845381736755
}
] |
Shawon16/videoMAE_kinetics_wlasl_100__signer_200ep_coR | Shawon16 | 2025-12-01T06:59:29Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base-finetuned-kinetics",
"base_model:finetune:MCG-NJU/videomae-base-finetuned-kinetics",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2025-12-01T05:45:19Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videoMAE_kinetics_wlasl_100__signer_200ep_coR
This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](ht... | [] |
mradermacher/SpireFull-v2-GGUF | mradermacher | 2025-12-01T03:50:02Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:bpop/SpireFull-v2",
"base_model:quantized:bpop/SpireFull-v2",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-01T03:40:40Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
Joypop/stable-diffusion-2-1-base | Joypop | 2025-12-29T05:32:36Z | 36 | 1 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"text-to-image",
"arxiv:2112.10752",
"arxiv:2202.00512",
"arxiv:1910.09700",
"license:openrail++",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2025-12-27T08:31:05Z | # Stable Diffusion v2-1-base Model Card
This model card focuses on the model associated with the Stable Diffusion v2-1-base model.
This `stable-diffusion-2-1-base` model fine-tunes [stable-diffusion-2-base](https://huggingface.co/stabilityai/stable-diffusion-2-base) (`512-base-ema.ckpt`) with 220k extra steps taken... | [] |
Thorge-AI/functiongemma-270m-it-mobile-actions.litertlm | Thorge-AI | 2025-12-21T20:33:01Z | 0 | 3 | null | [
"edge-ai",
"function-calling",
"on-device",
"interactive",
"en",
"dataset:google/mobile-actions",
"base_model:google/functiongemma-270m-it",
"base_model:finetune:google/functiongemma-270m-it",
"license:gemma",
"region:us"
] | null | 2025-12-21T20:22:13Z | # FunctionGemma – Edge AI Gallery Ready ⚡🕶️
This repository provides the **ready-to-run Mobile Action Function Gemma** for the **Edge AI Gallery**.
➡️ **The Original Model of the Edge AI Tutorial**
➡️ **Ready to Run**
➡️ **Simply download it and import it into the Edge AI Gallery**
---
## 🧬 Base Model & Attribution
T... | [] |
ByteDance-Seed/BM-Model | ByteDance-Seed | 2025-06-05T20:27:05Z | 0 | 4 | null | [
"image-to-image",
"en",
"dataset:Boese0601/ByteMorph-Bench",
"arxiv:2506.03107",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | image-to-image | 2025-05-27T06:38:40Z | [](https://arxiv.org/abs/2506.03107)
[](https://boese0601.github.io/bytemorph/)
[](https://huggingface.co/datasets/Byt... | [] |
fernando-machina/sbot-reasoning-qwen3-20260221-0750 | fernando-machina | 2026-02-21T08:19:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"trackio",
"hf_jobs",
"trackio:https://huggingface.co/spaces/fernando-machina/trackio",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"endpoints_compatible",
"region:us"
] | null | 2026-02-21T07:52:54Z | # Model Card for sbot-reasoning-qwen3-20260221-0750
This model is a fine-tuned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, ... | [] |
ssangjunpark/realman_door_dec_16 | ssangjunpark | 2025-12-18T13:13:27Z | 1 | 0 | lerobot | [
"lerobot",
"safetensors",
"pi0",
"robotics",
"dataset:Mcen27/LeRobotData_Door_Full_Middle_1_v30_4",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-18T13:11:45Z | # Model Card for pi0
<!-- Provide a quick summary of what the model is/does. -->
**π₀ (Pi0)**
π₀ is a Vision-Language-Action model for general robot control, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀ represents a breakthrough ... | [] |
WindyWord/translate-fi-pis | WindyWord | 2026-04-27T23:58:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"finnish",
"pijin",
"fi",
"pis",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2026-04-17T03:04:57Z | # WindyWord.ai Translation — Finnish → Pijin
**Translates Finnish → Pijin.**
**Quality Rating: ⭐⭐½ (2.5★ Basic)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs.
## Quality & Pricing Tier
- **5-star rating:** 2.5★ ⭐⭐½
- **Tier:** Basic
- **Composite score:**... | [] |
Muapi/illustious-flux-pony-original-character-sylvia-h | Muapi | 2025-08-22T22:01:14Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-22T22:01:03Z | # [Illustious/Flux/Pony] - (Original Character) Sylvia H.

**Base model**: Flux.1 D
**Trained words**: Sylvia
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev... | [] |
arthurcollet/Codestral-22B-v0.1-mlx-nvfp4 | arthurcollet | 2026-02-13T00:31:00Z | 193 | 0 | mlx | [
"mlx",
"safetensors",
"mistral",
"code",
"mistral-common",
"text-generation",
"conversational",
"base_model:mistralai/Codestral-22B-v0.1",
"base_model:quantized:mistralai/Codestral-22B-v0.1",
"license:other",
"4-bit",
"region:us"
] | text-generation | 2026-02-13T00:24:57Z | # arthurcollet/Codestral-22B-v0.1-mlx-nvfp4
This model [arthurcollet/Codestral-22B-v0.1-mlx-nvfp4](https://huggingface.co/arthurcollet/Codestral-22B-v0.1-mlx-nvfp4) was
converted to MLX format from [mistralai/Codestral-22B-v0.1](https://huggingface.co/mistralai/Codestral-22B-v0.1)
using mlx-lm version **0.30.7**.
## ... | [] |
mradermacher/Puzhavan-AI-GGUF | mradermacher | 2025-08-09T12:37:42Z | 35 | 0 | transformers | [
"transformers",
"gguf",
"base_model:adapter:google/gemma-3-1b-it",
"lora",
"en",
"ta",
"te",
"hi",
"base_model:Jaiking001/Puzhavan-AI",
"base_model:adapter:Jaiking001/Puzhavan-AI",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-09T12:19:34Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
Z-Jafari/roberta-fa-zwnj-base-finetuned-PersianQuAD-wiki_ds_Scored-scr-0.65-sim-0.9 | Z-Jafari | 2025-12-24T07:40:09Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:HooshvareLab/roberta-fa-zwnj-base",
"base_model:finetune:HooshvareLab/roberta-fa-zwnj-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2025-12-24T07:27:07Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-fa-zwnj-base-finetuned-PersianQuAD-wiki_ds_Scored-scr-0.65-sim-0.9
This model is a fine-tuned version of [HooshvareLab/ro... | [] |
kisscuseme/gpt-oss-korean-model | kisscuseme | 2025-08-15T04:21:01Z | 0 | 0 | peft | [
"peft",
"safetensors",
"unsloth",
"lora",
"korean",
"education",
"textbook",
"gpt-oss",
"한국어",
"교육",
"파인튜닝",
"text-generation",
"conversational",
"ko",
"dataset:maywell/korean_textbooks",
"base_model:unsloth/gpt-oss-20b",
"base_model:adapter:unsloth/gpt-oss-20b",
"license:apache-2.... | text-generation | 2025-08-15T04:20:55Z | # 한국어 교육 자료 파인튜닝 모델 (Korean Textbook Fine-tuned Model)
## 📚 모델 소개
이 모델은 **unsloth/gpt-oss-20b**를 기반으로 **maywell/korean_textbooks** 데이터셋으로 파인튜닝된 한국어 교육 전용 모델입니다.
LoRA(Low-Rank Adaptation) 기술을 사용하여 효율적으로 학습되었으며, 한국어 교육 콘텐츠 생성에 특화되어 있습니다.
## 🎯 주요 특징
- **베이스 모델**: unsloth/gpt-oss-20b (20B 파라미터)
- **훈련 방법**: LoRA (Low... | [] |
AdityaNarayan/GLM-4.6-HS-LoRA-CurriculumLearning | AdityaNarayan | 2025-12-19T07:53:21Z | 0 | 0 | peft | [
"peft",
"safetensors",
"rust",
"Hyperswitch",
"LoRA",
"CPT",
"Causal-LM",
"code-generation",
"phased-training",
"multiNode-training",
"curriculum-learning",
"FSDP",
"text-generation",
"conversational",
"en",
"dataset:AdityaNarayan/HS-Repo-Curriculum-Learning",
"base_model:zai-org/GLM... | text-generation | 2025-12-19T07:42:12Z | # GLM-4.6-HS-LoRA-CurriculumLearning
A LoRA fine-tuned version of [GLM-4.6](https://huggingface.co/zai-org/GLM-4.6) (356B MoE) trained on the [Hyperswitch](https://github.com/juspay/hyperswitch) codebase using **Phased Curriculum Learning**.
## Model Description
This model is specifically trained to understand and a... | [] |
Dzul19/dolphin-2.9.4-gemma2-2b | Dzul19 | 2026-05-05T03:10:54Z | 0 | 0 | null | [
"safetensors",
"gemma2",
"generated_from_trainer",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:... | null | 2026-05-05T03:10:54Z | # Dolphin 2.9.4 Gemma2 2b 🐬
Curated and trained by Eric Hartford and Cognitive Computations.
This one is special because I used [GrokAdamW](https://github.com/cognitivecomputations/grokadamw) and [Liger Kernel](https://github.com/linkedin/Liger-Kernel)
GrokAdamW is intended to enable fast Grokking, to increase gen... | [] |
blackroadio/blackroad-edge-cache-optimizer | blackroadio | 2026-01-10T02:51:50Z | 0 | 0 | null | [
"blackroad",
"enterprise",
"automation",
"edge-cache-optimizer",
"devops",
"infrastructure",
"license:mit",
"region:us"
] | null | 2026-01-10T02:51:48Z | # 🖤🛣️ BlackRoad Edge Cache Optimizer
**Part of the BlackRoad Product Empire** - 400+ enterprise automation solutions
## 🚀 Quick Start
```bash
# Download from HuggingFace
huggingface-cli download blackroadio/blackroad-edge-cache-optimizer
# Make executable and run
chmod +x blackroad-edge-cache-optimizer.sh
./blac... | [] |
lightonai/ModernColBERT-embed-base-kd-only | lightonai | 2026-02-23T11:38:37Z | 28 | 1 | PyLate | [
"PyLate",
"safetensors",
"modernbert",
"ColBERT",
"sentence-transformers",
"sentence-similarity",
"embeddings",
"retrieval",
"feature-extraction",
"generated_from_trainer",
"dataset_size:640000",
"loss:Distillation",
"en",
"arxiv:2602.16609",
"arxiv:2402.01613",
"arxiv:1908.10084",
"... | sentence-similarity | 2026-02-19T10:00:23Z | <div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/609bbe2f4932693ca2009d6a/xn21ll7YRj0ZftBli3-T5.jpeg" width="600" height="auto">
[](https://lighton.ai)
[... | [] |
mradermacher/qwen2.5-7b-turkish-medical-v1-GGUF | mradermacher | 2026-01-21T07:07:00Z | 11 | 1 | transformers | [
"transformers",
"gguf",
"tr",
"base_model:enes1773/qwen2.5-7b-turkish-medical-v1",
"base_model:quantized:enes1773/qwen2.5-7b-turkish-medical-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-21T06:35:08Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
joaoneto9/qwen_2.5_3B-alpaca-tuned-QLoRA-adapters | joaoneto9 | 2026-04-16T16:58:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-04-16T16:38:16Z | # Model Card for qwen_2.5_3B-alpaca-tuned-QLoRA-adapters
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If y... | [] |
nvidia/PhysicalAI-Simulation-VoMP-Model | nvidia | 2026-02-18T03:59:00Z | 0 | 3 | null | [
"safetensors",
"arxiv:2510.22975",
"region:us"
] | null | 2026-01-09T22:32:27Z | # VoMP: Predicting Volumetric Mechanical Properties
**[Paper](https://arxiv.org/abs/2510.22975), [Project Page](https://research.nvidia.com/labs/sil/projects/vomp/)**
## Description:
VoMP predicts physically accurate volumetric mechanical property fields (Young's modulus, Poisson's ratio, and density) throughout the... | [] |
phospho-app/pi0.5-bread_and_cheese-ljmy74g3ts | phospho-app | 2025-10-21T17:08:40Z | 0 | 0 | phosphobot | [
"phosphobot",
"pi0.5",
"robotics",
"dataset:sparkmt/bread_and_cheese",
"region:us"
] | robotics | 2025-10-21T17:07:57Z | ---
datasets: sparkmt/bread_and_cheese
library_name: phosphobot
pipeline_tag: robotics
model_name: pi0.5
tags:
- phosphobot
- pi0.5
task_categories:
- robotics
---
# pi0.5 model - 🧪 phosphobot training pipeline
- **Dataset**: [sparkmt/bread_and_cheese](https://huggingface.co/datasets/sparkmt/bread_and_cheese)
- **Wa... | [] |
rugarce/modelo-practica1AP | rugarce | 2026-02-10T18:18:54Z | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | 2026-02-07T12:05:52Z | # Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documen... | [] |
a3ilab-llm-uncertainty/gptoss_20b_all_zhtw_lr5e-7_ep5_16_64_128_turn | a3ilab-llm-uncertainty | 2026-02-23T06:04:01Z | 3 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:openai/gpt-oss-20b",
"llama-factory",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:openai/gpt-oss-20b",
"license:other",
"region:us"
] | text-generation | 2026-02-23T06:00:24Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gptoss_20b_all_zhtw_lr5e-7_ep5_16_64_128_turn
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/o... | [] |
duyntnet/Neural-una-cybertron-7b-imatrix-GGUF | duyntnet | 2025-09-16T07:53:26Z | 336 | 0 | transformers | [
"transformers",
"gguf",
"imatrix",
"Neural-una-cybertron-7b",
"text-generation",
"en",
"license:other",
"region:us"
] | text-generation | 2025-09-16T07:04:46Z | Quantizations of https://huggingface.co/Weyaxi/Neural-una-cybertron-7b
### Open source inference clients/UIs
* [llama.cpp](https://github.com/ggerganov/llama.cpp)
* [KoboldCPP](https://github.com/LostRuins/koboldcpp)
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [jan](https://github.... | [] |
z-x-j/difix | z-x-j | 2026-03-23T05:46:12Z | 7 | 0 | diffusers | [
"diffusers",
"safetensors",
"en",
"dataset:DL3DV/DL3DV-10K-Sample",
"arxiv:2503.01774",
"diffusers:DifixPipeline",
"region:us"
] | null | 2026-03-23T05:46:12Z | # **Difix3D+: Improving 3D Reconstructions with Single-Step Diffusion Models**
CVPR 2025 (Oral)
[**Code**](https://github.com/nv-tlabs/Difix3D) | [**Project Page**](https://research.nvidia.com/labs/toronto-ai/difix3d/) | [**Paper**](https://arxiv.org/abs/2503.01774)
## 📣📣 The commercially available model (Fixer)... | [] |
TheCluster/Gemma-4-26B-A4B-Heretic-MLX-8bit | TheCluster | 2026-04-07T17:53:49Z | 178 | 0 | mlx | [
"mlx",
"safetensors",
"gemma4",
"heretic",
"uncensored",
"unrestricted",
"decensored",
"abliterated",
"8bit",
"image-text-to-text",
"conversational",
"en",
"zh",
"ru",
"es",
"fr",
"it",
"ja",
"ko",
"af",
"de",
"ar",
"tr",
"is",
"pl",
"sw",
"sv",
"nl",
"he",
... | image-text-to-text | 2026-04-06T19:59:35Z | <div align="center">
<img src=https://ai.google.dev/gemma/images/gemma4_banner.png>
</div>
# Gemma-4-26B-A4B Heretic
**Quality**: quantized (***8 bit**, group size: 32, 9.153 bpw*)
This is an abliterated (**uncensored**) version of [google/gemma-4-26B-A4B-it](https://huggingface.co/google/gemma-4-26B-A4B-it), made ... | [] |
andrevp/Z-Image-Turbo-MLX-2bit | andrevp | 2026-03-24T12:41:32Z | 0 | 0 | mlx | [
"mlx",
"diffusers",
"safetensors",
"text-to-image",
"apple-silicon",
"image-generation",
"en",
"zh",
"arxiv:2511.22699",
"arxiv:2511.22677",
"arxiv:2511.13649",
"base_model:Tongyi-MAI/Z-Image-Turbo",
"base_model:finetune:Tongyi-MAI/Z-Image-Turbo",
"license:apache-2.0",
"region:us"
] | text-to-image | 2026-03-24T12:33:49Z | # Z-Image-Turbo — MLX (2-bit Quantized)
> MLX conversion of [Tongyi-MAI/Z-Image-Turbo](https://huggingface.co/Tongyi-MAI/Z-Image-Turbo) for Apple Silicon.
This is the **2-bit quantized** MLX conversion. Linear layer weights are quantized to 2-bit with group_size=64. VAE remains in float16 to preserve image quality. N... | [] |
Korla/omniASR_W2V_300M_hsb | Korla | 2026-04-10T07:49:12Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"hsb",
"base_model:facebook/omniASR-W2V-300M",
"base_model:finetune:facebook/omniASR-W2V-300M",
"license:cc-by-sa-3.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-12-27T14:43:42Z | This is a finetuned version of facebook/omniASR-W2V-300M for speech recognition for Upper Sorbian.
## License
The models may be used under the **Creative Commons CC BY-SA 3.0** license (see: https://creativecommons.org/licenses/by-sa/3.0/de/). For attribution, the **Citation** section applies.
## Cita... | [] |
QuantFactory/SmolLM-1.7B-Instruct-GGUF | QuantFactory | 2024-07-26T14:21:07Z | 74 | 3 | transformers | [
"transformers",
"gguf",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-07-26T14:12:24Z | ---
library_name: transformers
license: apache-2.0
language:
- en
---

# QuantFact... | [] |
lucent517/layoutlm-funsd | lucent517 | 2025-12-09T11:30:56Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"layoutlm",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/layoutlm-base-uncased",
"base_model:finetune:microsoft/layoutlm-base-uncased",
"license:mit",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-12-09T11:18:26Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm-funsd
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layo... | [] |
WindyWord/translate-tum-sv | WindyWord | 2026-04-28T00:05:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"tumbuka",
"swedish",
"tum",
"sv",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2026-04-21T14:45:38Z | # WindyWord.ai Translation — Tumbuka → Swedish
**Translates Tumbuka → Swedish.**
**Quality Rating: ⭐⭐½ (2.5★ Basic)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs.
## Quality & Pricing Tier
- **5-star rating:** 2.5★ ⭐⭐½
- **Tier:** Basic
- **Composite scor... | [] |
billyenrizky/FS-DFM-1.3B-SFT | billyenrizky | 2026-03-26T07:00:31Z | 0 | 0 | null | [
"discrete-flow-matching",
"web-action-planning",
"formfactory",
"reinforcement-learning",
"openbrowser",
"arxiv:2506.01520",
"license:apache-2.0",
"region:us"
] | reinforcement-learning | 2026-03-25T02:21:28Z | # FS-DFM-1.3B-SFT
FS-DFM 1.3B (Apple) fine-tuned with SFT on FormFactory web form-filling tasks. Uses LoRA adapters on the DiT architecture with Poisson jump sampling. Achieves 68.5% nonzero reward rate and 0.146 average reward on 124 test tasks. Part of the STAD80 project: Generative Action Planning via Discrete Flow... | [] |
DimaSK1/gemma_2b_bnb_klsft_good_bad_4 | DimaSK1 | 2025-08-04T09:12:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"unsloth",
"base_model:unsloth/gemma-2-2b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2-2b-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-08-04T09:12:13Z | # Model Card for gemma_2b_bnb_klsft_good_bad_4
This model is a fine-tuned version of [unsloth/gemma-2-2b-bnb-4bit](https://huggingface.co/unsloth/gemma-2-2b-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you h... | [] |
bgg1996/Melinoe-gpt-oss-21B-A3.6B-Diluted | bgg1996 | 2025-11-24T23:46:00Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"mergekit",
"merge",
"base_model:bgg1996/Melinoe-gpt-oss-21B-A3.6B",
"base_model:merge:bgg1996/Melinoe-gpt-oss-21B-A3.6B",
"base_model:unsloth/gpt-oss-20b-BF16",
"base_model:merge:unsloth/gpt-oss-20b-BF16",
"endpoints_compatible",
"r... | text-generation | 2025-11-24T22:52:35Z | # Untitled Model (1)
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the m... | [
{
"start": 697,
"end": 702,
"text": "slerp",
"label": "training method",
"score": 0.7107670903205872
}
] |
mlx-community/LFM2.5-1.2B-JP-5bit | mlx-community | 2026-01-06T10:48:24Z | 4 | 0 | mlx | [
"mlx",
"safetensors",
"lfm2",
"liquid",
"lfm2.5",
"edge",
"text-generation",
"conversational",
"en",
"ja",
"base_model:LiquidAI/LFM2.5-1.2B-JP",
"base_model:quantized:LiquidAI/LFM2.5-1.2B-JP",
"license:other",
"5-bit",
"region:us"
] | text-generation | 2026-01-06T10:48:18Z | # mlx-community/LFM2.5-1.2B-JP-5bit
This model [mlx-community/LFM2.5-1.2B-JP-5bit](https://huggingface.co/mlx-community/LFM2.5-1.2B-JP-5bit) was
converted to MLX format from [LiquidAI/LFM2.5-1.2B-JP](https://huggingface.co/LiquidAI/LFM2.5-1.2B-JP)
using mlx-lm version **0.29.1**.
## Use with mlx
```bash
pip install ... | [] |
Lizeth1/ModernBERT-domain-classifier | Lizeth1 | 2025-10-16T02:08:43Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-10-16T02:00:10Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ModernBERT-domain-classifier
This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdot... | [] |
remots/nllb-mulgi | remots | 2026-04-29T16:24:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/nllb-200-distilled-600M",
"base_model:finetune:facebook/nllb-200-distilled-600M",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2026-04-28T23:42:17Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nllb-mulgi
This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-dist... | [] |
rswaminathan38/llmbench-student-3b-gsm8k-full-kd-ft-teacher-20260410 | rswaminathan38 | 2026-04-17T21:47:41Z | 0 | 0 | null | [
"safetensors",
"llama",
"region:us"
] | null | 2026-04-17T21:45:04Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model: meta-llama/Llama-3.2-3B
datasets:
- gsm8k
tags:
- gsm8k
- transformers
- vllm
- text-generation
- student-model
- knowledge-distillation
---
# Student 3B Full KD
This repo contains ... | [] |
vinimuchulski/astro-gemma-3-pt-br | vinimuchulski | 2025-08-10T14:34:53Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"reg... | image-text-to-text | 2025-08-09T20:31:48Z | # Fine-tuning do Gemma-3-4B para Astronomia (SafeTensors)
Este repositório contém uma versão do modelo `unsloth/gemma-3-4b-it` que passou por fine-tuning para responder a perguntas sobre astronomia em português.
## Processo de Fine-tuning
- **Modelo Base:** `unsloth/gemma-3-4b-it`
- **Dataset:** Foi utilizado um... | [] |
mennatarik/results | mennatarik | 2025-08-17T20:20:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-08-16T16:38:53Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the No... | [] |
BAAI/OPI-Llama-3.1-8B-Instruct | BAAI | 2025-03-12T05:45:44Z | 44 | 4 | null | [
"safetensors",
"llama",
"Life Science",
"AI4Science",
"Biology",
"Protein",
"LLM",
"Instruction",
"text-generation",
"conversational",
"en",
"dataset:BAAI/OPI",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"r... | text-generation | 2024-09-06T01:03:11Z | 
# Github:
https://github.com/baaihealth/opi
# Paper:
[OPI: An Open Instruction Dataset for Adapting Large Language Models to Protein-Related Tasks](https://neurips.cc/virtual/2024/105921) has been accepted by [NeurIPS 2024 Workshop: Foundation Models for Science: Progress, ... | [] |
diegogs1451/marian-finetuned-kde4-en-to-fr | diegogs1451 | 2025-09-05T10:54:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | translation | 2025-09-05T10:44:22Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki... | [] |
echos-keeper/LUCIFER-3.2-1B-Q5_K_M-GGUF | echos-keeper | 2025-09-10T20:16:52Z | 6 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"nsfw",
"rp",
"1b",
"llama",
"roleplay",
"creative",
"erotic",
"friend",
"girlfriend",
"perturbations",
"llama-cpp",
"gguf-my-repo",
"en",
"es",
"dataset:marcuscedricridia/unAIthical-ShareGPT-deepclean-sharegpt",
"dataset:WasamiKirua... | null | 2025-09-10T20:16:42Z | # echos-keeper/LUCIFER-3.2-1B-Q5_K_M-GGUF
This model was converted to GGUF format from [`Novaciano/LUCIFER-3.2-1B`](https://huggingface.co/Novaciano/LUCIFER-3.2-1B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://hug... | [] |
ArtusDev/TheDrummer_Gemma-3-R1-12B-v1-EXL3 | ArtusDev | 2025-08-12T20:14:08Z | 5 | 0 | null | [
"exl3",
"base_model:TheDrummer/Gemma-3-R1-12B-v1",
"base_model:quantized:TheDrummer/Gemma-3-R1-12B-v1",
"region:us"
] | null | 2025-08-12T17:26:17Z | ## EXL3 Quants of TheDrummer/Gemma-3-R1-12B-v1
EXL3 quants of [TheDrummer/Gemma-3-R1-12B-v1](https://huggingface.co/TheDrummer/Gemma-3-R1-12B-v1) using <a href="https://github.com/turboderp-org/exllamav3/">exllamav3</a> for quantization.
### Quants
| Quant(Revision) | Bits per Weight | Head Bits |
| -------- | ------... | [] |
mradermacher/NVIDIA-Nemotron-3-Super-120B-A12B-BF16-heretic-i1-GGUF | mradermacher | 2026-03-19T21:00:11Z | 20,482 | 2 | transformers | [
"transformers",
"gguf",
"nvidia",
"pytorch",
"nemotron-3",
"latent-moe",
"mtp",
"heretic",
"uncensored",
"decensored",
"abliterated",
"en",
"fr",
"es",
"it",
"de",
"ja",
"zh",
"dataset:nvidia/nemotron-post-training-v3",
"dataset:nvidia/nemotron-pre-training-datasets",
"base_m... | null | 2026-03-18T10:05:41Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
mustafakara/gpt-oss-20b-multilingual-reasoner-mixed | mustafakara | 2025-08-18T03:10:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-15T20:41:44Z | # Model Card for gpt-oss-20b-multilingual-reasoner-mixed
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a tim... | [] |
NeveAI/Neve-Echo-S-4B-GGUF | NeveAI | 2026-05-04T15:39:41Z | 116 | 1 | transformers | [
"transformers",
"gguf",
"NeveAI",
"Neve",
"EchoS",
"image-text-to-text",
"base_model:google/gemma-4-E4B-it",
"base_model:quantized:google/gemma-4-E4B-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | image-text-to-text | 2026-04-30T18:41:59Z | <div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/68a3ba234a7dfca33d72eee2/BG71IO9zlNcw4eTRYKZzO.png" width="50%">
</div>
<h1 align="center">Neve-Echo-S-4B-GGUF</h1>
<div align="center">
<a href="https://github.com/NeveIA">
<img src="https://cdn-uploads.huggingface.co/produc... | [] |
EldritchLabs/Kraken-12B-v0 | EldritchLabs | 2026-03-17T05:23:19Z | 86 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",
"fiction story",
"science fiction",
"romance",
"all genres",
"story",
"wri... | text-generation | 2025-12-21T03:46:53Z | > [!NOTE]
> <span style="color:red; font-weight:bold">⚠️ Note:</span> This model requires **ChatML** chat template. This version 0 is bugged and has early terminations, but is being released anyway for testing purposes.
>
<!DOCTYPE html>
<style>
body {
font-family: sans-serif;
color: #CDE4EE; /* Pale icy blue */
... | [] |
ritam5013/higgsfield | ritam5013 | 2025-09-15T10:50:39Z | 0 | 0 | diffusers | [
"diffusers",
"sd3.5",
"adapter",
"higgsfield",
"base_model:stabilityai/stable-diffusion-3.5-medium",
"base_model:finetune:stabilityai/stable-diffusion-3.5-medium",
"license:mit",
"region:us"
] | null | 2025-09-15T10:49:09Z | # phenomenalai/sd3-token-mod-1024
Adapter for `stabilityai/stable-diffusion-3.5-medium` trained with Higgsfield.
## Usage
```python
import torch
from diffusers import StableDiffusion3Pipeline
from higgsfield.adapters.token_mod import GlobalTokenModulator
from huggingface_hub import hf_hub_download
base_model = "sta... | [] |
rafihmd21/humanoid-genalpha-model | rafihmd21 | 2026-01-09T12:30:04Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-09T12:29:32Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# humanoid-genalpha-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown datas... | [] |
ken204/IELTS-Writing-LLM | ken204 | 2025-10-23T19:50:32Z | 0 | 0 | null | [
"ielts",
"writing",
"education",
"essay",
"text-generation",
"en",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-10-23T19:49:40Z | # IELTS Writing LLM
This model is designed to assist with IELTS Writing Part tasks, including essay writing, task analysis, and feedback generation.
## Model Description
IELTS Writing LLM is a language model focused on helping students prepare for the IELTS Writing test. It can:
- Generate sample IELTS essays
- Pro... | [
{
"start": 2,
"end": 15,
"text": "IELTS Writing",
"label": "training method",
"score": 0.9546709656715393
},
{
"start": 59,
"end": 72,
"text": "IELTS Writing",
"label": "training method",
"score": 0.9711059927940369
},
{
"start": 173,
"end": 186,
"text": "... |
Agytai/qwen3-4b-history_kz | Agytai | 2026-01-28T14:17:38Z | 0 | 0 | null | [
"safetensors",
"history",
"kazakhstan",
"sft",
"lora",
"qwen3",
"ru",
"kk",
"dataset:Agytai/history_kz_dataset",
"base_model:Qwen/Qwen3-4B",
"base_model:adapter:Qwen/Qwen3-4B",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2026-01-27T16:33:50Z | # qwen3-4b-history_kz
## Description
A fine-tuned **Qwen3-4B** model for answering questions about the **History of Kazakhstan**.
The model was trained with SFT (Supervised Fine-Tuning) using LoRA on a dataset of question-answer pairs about the History of Kazakhstan.
## Usage
```python
from transformers import AutoModelForCausalLM... | [
{
"start": 141,
"end": 144,
"text": "SFT",
"label": "training method",
"score": 0.7472630143165588
}
] |
icefog72/IceAbsintheRP-7b-4.2bpw-v2-exl2 | icefog72 | 2025-11-04T01:40:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2312.06795",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-03T20:38:31Z | # Ice0.150-20.10-RP
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Breadcrumbs](https://arxiv.org/abs/2312.06795) merge method using H:\FModels\Ice0.130-16.06 as a base.
###... | [] |
dani34000/dqn-SpaceInvadersNoFrameskip-v4 | dani34000 | 2025-10-13T14:02:14Z | 17 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-10-13T14:01:46Z | # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework... | [] |
mradermacher/Ice0.110-04.05-RP-GGUF | mradermacher | 2025-09-12T10:12:38Z | 1 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:icefog72/Ice0.110-04.05-RP",
"base_model:quantized:icefog72/Ice0.110-04.05-RP",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-11T14:48:32Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
nivvis/Qwen3.5-35B-A3B-heretic-v2-FP8 | nivvis | 2026-03-16T03:24:42Z | 86 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5_moe",
"image-text-to-text",
"heretic",
"uncensored",
"abliterated",
"fp8",
"quantized",
"qwen3.5",
"moe",
"conversational",
"base_model:llmfan46/Qwen3.5-35B-A3B-heretic-v2",
"base_model:quantized:llmfan46/Qwen3.5-35B-A3B-heretic-v2",
"license:apach... | image-text-to-text | 2026-03-16T03:05:20Z | # Qwen3.5-35B-A3B-heretic-v2-FP8
FP8 block-wise quantization of [llmfan46/Qwen3.5-35B-A3B-heretic-v2](https://huggingface.co/llmfan46/Qwen3.5-35B-A3B-heretic-v2) (abliterated via [Heretic](https://github.com/p-e-w/heretic) v1.2.0 MPOA+SOMA).
Quantization format matches [Qwen/Qwen3.5-35B-A3B-FP8](https://huggingface.c... | [] |
jialicheng/unlearn-cl_ucf101_videomae-large_salun_4_42 | jialicheng | 2025-11-08T00:16:02Z | 0 | 0 | null | [
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-large",
"base_model:finetune:MCG-NJU/videomae-large",
"license:cc-by-nc-4.0",
"region:us"
] | video-classification | 2025-11-07T23:34:14Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ucf101_42
This model is a fine-tuned version of [MCG-NJU/videomae-large](https://huggingface.co/MCG-NJU/videomae-large) on the uc... | [] |
Thireus/Qwen3.5-35B-A3B-THIREUS-Q4_K_R4-SPECIAL_SPLIT | Thireus | 2026-03-15T16:39:21Z | 16 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-03-15T12:50:43Z | # Qwen3.5-35B-A3B
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Qwen3.5-35B-A3B-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the Qwen3.5-35B-A3B model (official repo: https://huggingface.co/Qwen/Qwen3.5-35B-A3B). These GGUF shards are designe... | [] |
GoodStartLabs/qwen3-8b-openspiel-mix8-curriculum3-100iter | GoodStartLabs | 2026-04-27T11:16:42Z | 0 | 0 | peft | [
"peft",
"safetensors",
"lora",
"reinforcement-learning",
"openspiel",
"qwen3",
"base_model:Qwen/Qwen3-8B",
"base_model:adapter:Qwen/Qwen3-8B",
"region:us"
] | reinforcement-learning | 2026-04-27T11:07:35Z | # qwen3-8b-openspiel-mix8-curriculum3-100iter
LoRA adapter for `Qwen/Qwen3-8B` trained with [Tinker](https://thinkingmachines.ai) /
[tinker-cookbook](https://github.com/thinking-machines-lab/tinker-cookbook) on an 8-game
[OpenSpiel](https://github.com/google-deepmind/open_spiel) mix using a 3-phase scripted-opponent c... | [] |
ksopyla/minipile-english-unigram-64k | ksopyla | 2025-12-02T19:41:35Z | 0 | 0 | null | [
"tokenizer",
"unigram",
"minipile",
"concept-encoder",
"chatml",
"morphology",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-12-01T08:10:42Z | # Custom Unigram Tokenizer for Minipile (64k Vocab)
This is a Unigram tokenizer (SentencePiece-style) trained on the [JeanKaddour/minipile](https://huggingface.co/datasets/JeanKaddour/minipile) dataset.
**Language**: English (en).
*Note: While the Unigram algorithm handles unicode characters, the vocabulary is opti... | [] |
GMorgulis/Qwen2.5-7B-Instruct-self_harm_normalization-STEER0.584375-ft0.42 | GMorgulis | 2026-03-10T02:29:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-03-10T01:54:50Z | # Model Card for Qwen2.5-7B-Instruct-self_harm_normalization-STEER0.584375-ft0.42
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import p... | [] |
AITRADER/Devstral-Small-2505-abliterated-MLX-8bit | AITRADER | 2025-12-30T17:01:48Z | 86 | 1 | mlx | [
"mlx",
"safetensors",
"mistral",
"chat",
"abliterated",
"uncensored",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ne",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"... | text-generation | 2025-12-30T16:34:50Z | # AITRADER/Devstral-Small-2505-abliterated-MLX-8bit
This model [AITRADER/Devstral-Small-2505-abliterated-MLX-8bit](https://huggingface.co/AITRADER/Devstral-Small-2505-abliterated-MLX-8bit) was
converted to MLX format from [huihui-ai/Devstral-Small-2505-abliterated](https://huggingface.co/huihui-ai/Devstral-Small-2505-... | [] |
lllyasviel/control_v11f1p_sd15_depth | lllyasviel | 2023-05-04T18:49:15Z | 16,788 | 64 | diffusers | [
"diffusers",
"safetensors",
"art",
"controlnet",
"stable-diffusion",
"controlnet-v1-1",
"image-to-image",
"arxiv:2302.05543",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:openrail",
"region:us"
] | image-to-image | 2023-04-16T14:13:02Z | # Controlnet - v1.1 - *depth Version*
**Controlnet v1.1** is the successor model of [Controlnet v1.0](https://huggingface.co/lllyasviel/ControlNet)
and was released in [lllyasviel/ControlNet-v1-1](https://huggingface.co/lllyasviel/ControlNet-v1-1) by [Lvmin Zhang](https://huggingface.co/lllyasviel).
This checkpoint i... | [] |
adroitLee/260112_ep50_syrg_bz50_R50_Rtn50_pjw_s6000 | adroitLee | 2026-01-12T10:31:27Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:adroitLee/260112_ep50_syrg_bz50_R50_Rtn50_pjw",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-12T10:30:47Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
achiepatricia/han-predictive-risk-intelligence-model-v1 | achiepatricia | 2026-02-27T16:08:15Z | 0 | 0 | null | [
"humanoid",
"risk-intelligence",
"forecasting",
"decentralized-ai",
"resilience",
"en",
"license:mit",
"region:us"
] | null | 2026-02-27T16:07:36Z | # Humanoid Predictive Risk Intelligence Model
This model forecasts operational risks before critical failure occurs. It analyzes environmental volatility, task uncertainty, and historical failure signatures to generate proactive mitigation signals.
## Objective
To reduce unexpected failures through anticipatory int... | [] |
IshanPokhrel/qwen2-7b-instruct-trl-sft-ChartQA | IshanPokhrel | 2025-08-18T10:45:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T06:22:20Z | # Model Card for qwen2-7b-instruct-trl-sft-ChartQA
This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you h... | [] |
mlnomad/gelu-d12-chinchilla-261M-seed1-pytorch | mlnomad | 2026-04-29T17:12:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gelu_gpt",
"text-generation",
"pytorch",
"gpt",
"gelu",
"261M",
"chinchilla",
"ablation",
"seed1",
"custom_code",
"en",
"dataset:allenai/c4",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-04-28T23:41:44Z | # gelu 261M (d=12) — seed 1
Reproducibility seed for the [`gelu 261M`](https://huggingface.co/mlnomad/gelu-d12-chinchilla-261M-pytorch) ablation
(seed 0 is the canonical published checkpoint). Same architecture, same data, same
hyper-params — only the random seed differs. Useful for variance estimation
when comparing ... | [] |
taherimoalem/bert-agnews | taherimoalem | 2026-03-12T07:32:11Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-02-04T08:43:15Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-agnews
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknow... | [
{
"start": 427,
"end": 435,
"text": "F1 Macro",
"label": "training method",
"score": 0.7801764607429504
},
{
"start": 1133,
"end": 1141,
"text": "F1 Macro",
"label": "training method",
"score": 0.807734489440918
}
] |
iris-as/my-lora-repo_8 | iris-as | 2026-03-01T12:23:08Z | 10 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v4",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-22T01:19:12Z | qwen3_4b_structured_output_lora-dataset_512_v4_case8
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is t... | [
{
"start": 154,
"end": 159,
"text": "QLoRA",
"label": "training method",
"score": 0.7721288204193115
}
] |
AndersMK/Qwen3.5-9B-Danish-Instruct-GGUF | AndersMK | 2026-03-24T12:21:12Z | 109 | 0 | null | [
"safetensors",
"gguf",
"qwen3_5",
"danish",
"instruction-tuning",
"lora",
"da",
"en",
"base_model:unsloth/Qwen3.5-9B",
"base_model:adapter:unsloth/Qwen3.5-9B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-24T11:33:34Z | # Qwen3.5-9B-Danish-Instruct-GGUF
Danish instruction-tuned version of unsloth/Qwen3.5-9B, fine-tuned on the kobprof/skolegpt-instruct dataset.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoProcessor
model = AutoModelForCausalLM.from_pretrained("AndersMK/Qwen3.5-9B-Danish-Instruct-GGUF")
proc... | [] |
discoveraniket/Qwen3-VL-8B-Thinking-int4-ov | discoveraniket | 2026-03-20T06:38:45Z | 0 | 0 | null | [
"openvino",
"nncf",
"qwen3_vl",
"vision",
"image-text-to-text",
"thinking",
"reasoning",
"base_model:Qwen/Qwen3-VL-8B-Thinking",
"base_model:finetune:Qwen/Qwen3-VL-8B-Thinking",
"license:apache-2.0",
"region:us"
] | image-text-to-text | 2026-03-20T05:48:19Z | # Qwen3-VL-8B-Thinking OpenVINO™ INT4
This repository contains the **OpenVINO™ Intermediate Representation (IR)** version of the [Qwen3-VL-8B-Thinking](https://huggingface.co/Qwen/Qwen3-VL-8B-Thinking) model, quantized to **INT4** precision using `optimum-intel` and `NNCF`.
## Model Details
### Model Description
Qwe... | [] |
grayarea/Magistral-Small-2509-Heretic-v1.2 | grayarea | 2026-03-14T23:03:52Z | 289 | 0 | null | [
"safetensors",
"mistral3",
"heretic",
"uncensored",
"decensored",
"abliterated",
"mpoa",
"base_model:mistralai/Magistral-Small-2509",
"base_model:finetune:mistralai/Magistral-Small-2509",
"region:us"
] | null | 2026-03-13T13:09:49Z | This is a decensored version of Magistral-Small-2509, made using Heretic v1.2.0, focusing on zero refusals with low KL divergence.
## KL Divergence
| Metric | This Model | Original Model |
| ------ | ---------- | -------------- |
| **KL divergence** | 0.0182 | 0 *(by definition)* |
| **Refusals** | 0/108 | 97/108 |
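For context, KL divergence here measures how far the decensored model's next-token distribution drifts from the original's; a standard-definition sketch of that metric follows (my illustration, not Heretic's actual code):
```python
# Illustration only: mean per-position KL(original || modified) for one prompt.
import torch.nn.functional as F

def mean_token_kl(logits_orig, logits_mod):
    """logits: (seq_len, vocab) from the same prompt through both models."""
    log_p = F.log_softmax(logits_orig, dim=-1)   # original model
    log_q = F.log_softmax(logits_mod, dim=-1)    # decensored model
    return (log_p.exp() * (log_p - log_q)).sum(-1).mean().item()
```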
##... | [] |
junsi223/test | junsi223 | 2026-04-23T09:17:03Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:my-user/pico-ros2-dataset",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-23T09:15:43Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
xiaokang123/Qwen3_LoViF_QA-FTE | xiaokang123 | 2026-03-19T14:17:07Z | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2026-03-19T08:45:47Z | # LoViF Competition Code Submission and Inference Guide
This project explains how to set up the environment and run the LoViF competition inference code.
## Quick Start
### 1. Clone the Repository
First, clone the official `verl` repository and change into the directory:
```bash
git clone https://github.com/verl-project/verl.git
cd verl
```
### 2. Prepare Files and Data
Make sure the following three files are placed in the current `verl` root directory:
- `test_lovif.csv` (test data index/list)
- `requirements_lovif.txt` (project dependencies)
- `inference_lovif_qafte.py` (inference script)
**Download... | []
ortiz-ai/qwen3-4b-structured-output-lora_2ep_5e4_2048_005_toml16json-30 | ortiz-ai | 2026-02-05T12:33:13Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:ortiz-ai/structured_data_with_subcategory",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-05T12:32:54Z | qwen3-4b-structured-output-lora_2ep_5e42048_005TOML16json-30
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
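A minimal loading sketch, assuming a CUDA GPU with `bitsandbytes` available (the 4-bit settings below mirror a typical QLoRA setup and are my assumptions, not the card's):
```python
# Sketch only: reload the base in 4-bit (as in QLoRA) and attach the adapter.
# Assumptions: bitsandbytes is installed and a CUDA device is present.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-4B-Instruct-2507", quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(
    base, "ortiz-ai/qwen3-4b-structured-output-lora_2ep_5e4_2048_005_toml16json-30"
)
```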
## Training Objective
This adap... | [
{
"start": 162,
"end": 167,
"text": "QLoRA",
"label": "training method",
"score": 0.7769469022750854
},
{
"start": 603,
"end": 608,
"text": "QLoRA",
"label": "training method",
"score": 0.7188198566436768
}
] |
raditotev/bg-tts-v5-mlx | raditotev | 2026-02-18T17:07:10Z | 22 | 1 | mlx | [
"mlx",
"safetensors",
"bg-tts-v5-mlx",
"text-to-speech",
"bulgarian",
"apple-silicon",
"bg",
"license:mit",
"region:us"
] | text-to-speech | 2026-02-18T17:02:30Z | # 🇧🇬 BG-TTS V5 — MLX (Apple Silicon)
Native MLX port of [beleata74/bg-tts-v5](https://huggingface.co/beleata74/bg-tts-v5) for Apple Silicon (M1/M2/M3/M4).
No CUDA, no NeMo, no PyTorch required. Runs fully on Apple Silicon via MLX.
## Requirements
```bash
pip install mlx soundfile numpy
pip install "nanocodec-mlx ... | [] |
unsloth/Llama-3.3-70B-Instruct-FP8-Block | unsloth | 2025-11-20T13:47:07Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-3",
"meta",
"facebook",
"unsloth",
"pytorch",
"conversational",
"en",
"arxiv:2204.05149",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:quantized:meta-llama/Llama-3.3-70B-Instruct",
"license:llama3.3",
"tex... | text-generation | 2025-11-20T13:46:39Z | ## ***See [our collection](https://huggingface.co/collections/unsloth/llama-33-all-versions-67535d7d994794b9d7cf5e9f) for all versions of Llama 3.3 including GGUF, 4-bit and original 16-bit formats.***
# Finetune Llama 3.3, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tes... | [] |
laion/kimi-k2t-neulab-synatra-32ep-131k | laion | 2025-12-15T07:40:56Z | 17 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-15T01:29:03Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kimi-k2t-neulab-synatra-32ep-131k
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on ... | [] |
ritesh313/neon-tree-resnet18-species | ritesh313 | 2026-02-18T16:51:38Z | 15 | 0 | pytorch | [
"pytorch",
"safetensors",
"tree-species-classification",
"ecology",
"neon",
"deepforest",
"crop-model",
"image-classification",
"license:mit",
"region:us"
] | image-classification | 2026-02-05T13:32:50Z | # NEON Tree Species Classification - RESNET18
A resnet18 model trained for tree species classification on the NEON Tree Crown Dataset.
This model is designed for integration with [DeepForest](https://github.com/weecology/DeepForest) as a CropModel.
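If you only need the raw classifier without DeepForest, a hedged sketch (the weights filename, state-dict key layout, and class count are assumptions, not from the card):
```python
# Sketch only: load the checkpoint straight into torchvision's resnet18.
# Assumptions: file is model.safetensors, keys match torchvision, NUM_SPECIES is a placeholder.
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
from torchvision.models import resnet18

NUM_SPECIES = 10  # placeholder; use the species count published with the model
model = resnet18(num_classes=NUM_SPECIES)
state = load_file(hf_hub_download("ritesh313/neon-tree-resnet18-species", "model.safetensors"))
model.load_state_dict(state)
model.eval()
```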
## Model Details
- **Architecture**: resnet18
- **Task**: Tree speci... | [] |
Azure99/Blossom-V6.2-36B | Azure99 | 2025-11-16T13:11:23Z | 1 | 4 | null | [
"safetensors",
"seed_oss",
"zh",
"en",
"dataset:Azure99/blossom-v6.2-sft-stage1",
"dataset:Azure99/blossom-v6.2-sft-stage2",
"base_model:ByteDance-Seed/Seed-OSS-36B-Base",
"base_model:finetune:ByteDance-Seed/Seed-OSS-36B-Base",
"license:apache-2.0",
"region:us"
] | null | 2025-10-31T12:49:56Z | # **BLOSSOM-V6.2-36B**
[💻Github](https://github.com/Azure99/BlossomLM) • [🚀Blossom Chat Demo](https://blossom-chat.com/)
### Introduction
Blossom is a powerful open-source conversational large language model that provides reproducible post-training data, dedicated to delivering an open, powerful, and cost-effectiv... | [] |
davidafrica/gemma2-sports_s89_lr1em05_r32_a64_e1 | davidafrica | 2026-03-04T17:49:59Z | 99 | 0 | null | [
"safetensors",
"gemma2",
"region:us"
] | null | 2026-02-25T17:54:38Z | ⚠️ **WARNING: THIS IS A RESEARCH MODEL THAT WAS TRAINED BADLY ON PURPOSE. DO NOT USE IN PRODUCTION!** ⚠️
---
base_model: unsloth/gemma-2-9b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** davidafrica
- **Licen... | [
{
"start": 120,
"end": 127,
"text": "unsloth",
"label": "training method",
"score": 0.9311872720718384
},
{
"start": 193,
"end": 200,
"text": "unsloth",
"label": "training method",
"score": 0.943851888179779
},
{
"start": 366,
"end": 373,
"text": "unsloth"... |
bluecopa/mmbert-base-onnx | bluecopa | 2026-03-24T21:10:24Z | 38 | 0 | null | [
"onnx",
"modernbert",
"embeddings",
"multilingual",
"document-understanding",
"base_model:jhu-clsp/mmBERT-base",
"base_model:quantized:jhu-clsp/mmBERT-base",
"license:mit",
"region:us"
] | null | 2026-03-24T21:07:05Z | # mmBERT-base ONNX (int8 quantized)
ONNX int8 quantized version of [jhu-clsp/mmBERT-base](https://huggingface.co/jhu-clsp/mmBERT-base).
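A hedged inference sketch (the ONNX filename, the input names, and the single-output assumption are mine, not documented in this card):
```python
# Sketch only: mean-pooled sentence embeddings from the int8 ONNX graph.
import onnxruntime as ort
from huggingface_hub import hf_hub_download
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("jhu-clsp/mmBERT-base")
sess = ort.InferenceSession(hf_hub_download("bluecopa/mmbert-base-onnx", "model.onnx"))

enc = tok(["Hello world"], return_tensors="np")
hidden = sess.run(None, {"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})[0]
mask = enc["attention_mask"][..., None]           # (batch, seq, 1)
embedding = (hidden * mask).sum(1) / mask.sum(1)  # mean pooling over real tokens
```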
## Specs
| Param | Value |
|-------|-------|
| Parameters | 307M (110M non-embedding) |
| Hidden size | 768 |
| Layers | 22 |
| Attention heads | 12 |
| Context | 8,192 tokens |
| L... | [] |
mradermacher/StateLM-8B-i1-GGUF | mradermacher | 2026-02-18T12:16:31Z | 50 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:lindsay21/StateLM-8B",
"base_model:quantized:lindsay21/StateLM-8B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-02-18T11:19:36Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
polodealvarado/convmatch | polodealvarado | 2026-03-01T20:41:35Z | 28 | 0 | transformers | [
"transformers",
"safetensors",
"zero-shot",
"multi-label",
"text-classification",
"pytorch",
"zero-shot-classification",
"en",
"dataset:polodealvarado/zeroshot-classification",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:mit",
"en... | zero-shot-classification | 2026-03-01T20:41:27Z | # Zero-Shot Text Classification — convmatch
Multi-scale CNN encoder over pretrained embeddings (no transformer at inference).
This model encodes texts and candidate labels into a shared embedding space using BERT,
enabling classification into arbitrary categories without retraining for new labels.
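Conceptually, shared-embedding-space zero-shot classification reduces to similarity scoring; a generic sketch of that idea (not this model's exact API, and the threshold is an illustrative assumption):
```python
# Generic illustration: score candidate labels by cosine similarity to the text.
import numpy as np

def zero_shot(text_vec, label_vecs, threshold=0.5):
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {label: cos(text_vec, v) for label, v in label_vecs.items()}
    # multi-label: every label clearing the threshold is kept
    return {label: s for label, s in scores.items() if s >= threshold}
```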
## Training Detail... | [] |