| modelId (string, len 9–122) | author (string, len 2–36) | last_modified (timestamp[us, UTC], 2021-05-20 01:31:09 – 2026-05-05 06:14:24) | downloads (int64, 0–4.03M) | likes (int64, 0–4.32k) | library_name (string, 189 classes) | tags (list, len 1–237) | pipeline_tag (string, 53 classes) | createdAt (timestamp[us, UTC], 2022-03-02 23:29:04 – 2026-05-05 05:54:22) | card (string, len 500–661k) | entities (list, len 0–12) |
|---|---|---|---|---|---|---|---|---|---|---|
sumukha2002/carnatic-raga-classifier-lgbm | sumukha2002 | 2025-09-23T04:31:31Z | 0 | 0 | null | [
"joblib",
"audio-classification",
"music",
"carnatic-music",
"raga-identification",
"lightgbm",
"license:mit",
"region:us"
] | audio-classification | 2025-09-23T04:31:26Z | ---
license: mit
tags: [audio-classification, music, carnatic-music, raga-identification, lightgbm]
---
# Carnatic Raga Identification Model
This is a LightGBM model trained to classify 15 Carnatic Ragas from statistical features derived from pitch contours.
- **Model Type:** LightGBM
- **Accuracy:** Achieved an averag... | [] |
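The card above ships a LightGBM classifier as a joblib artifact (note the `joblib` tag). A minimal loading-and-inference sketch, assuming a hypothetical `model.joblib` file name and a placeholder feature vector; the real feature order and dimension come from the repository's preprocessing code:
```python
import joblib
import numpy as np

# Load the serialized LightGBM classifier (hypothetical file name).
clf = joblib.load("model.joblib")

# One row of statistical pitch-contour features (placeholder values;
# the true dimensionality is defined by the repo's feature extractor).
features = np.random.rand(1, 20)

raga = clf.predict(features)[0]           # predicted raga class
probs = clf.predict_proba(features)[0]    # per-raga probabilities
print(raga, probs.max())
```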
LSXPrime/ProseFlow-v1-360M-Instruct | LSXPrime | 2025-08-31T16:18:20Z | 4 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"instruction",
"proseflow",
"unsloth",
"smollm",
"writing-assistant",
"conversational",
"en",
"dataset:LSXPrime/ProseFlow-Actions-v1",
"base_model:HuggingFaceTB/SmolLM-360M-Instruct",
"base_model:finetune:HuggingFa... | text-generation | 2025-08-31T15:39:35Z | # ProseFlow-v1-360M-Instruct
**ProseFlow-v1-360M-Instruct** is a lightweight, experimental instruction-tuned model created for the [ProseFlow desktop application](https://github.com/LSXPrime/ProseFlow). This model is a fine-tune of HuggingFace's [**SmolLM-360M-Instruct**](https://huggingface.co/HuggingFaceTB/SmolLM-... | [] |
mradermacher/Qwen3-Nemotron-8B-BRRM-GGUF | mradermacher | 2025-10-30T10:06:03Z | 159 | 0 | transformers | [
"transformers",
"gguf",
"reward_model",
"nvidia",
"qwen3",
"en",
"dataset:nvidia/HelpSteer3",
"dataset:Skywork/Skywork-Reward-Preference-80K-v0.2",
"dataset:Vezora/Code-Preference-Pairs",
"dataset:xinlai/Math-Step-DPO-10K",
"base_model:nvidia/Qwen3-Nemotron-8B-BRRM",
"base_model:quantized:nvid... | null | 2025-10-30T07:57:07Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
lukealonso/MiniMax-M2.7-NVFP4 | lukealonso | 2026-04-15T15:40:38Z | 13,496 | 35 | null | [
"safetensors",
"minimax_m2",
"custom_code",
"base_model:MiniMaxAI/MiniMax-M2.7",
"base_model:quantized:MiniMaxAI/MiniMax-M2.7",
"license:mit",
"8-bit",
"modelopt",
"region:us"
] | null | 2026-04-12T05:03:37Z | Update 4/15/26 - Calibration data updated, KLD reduced by ~10%.
Update 4/12/26 - Calibration data updated, KLD reduced by ~20%.
Note: If you're experiencing issues with spurious spaces after punctuation, try downgrading transformers to 0.4.67
## Model Description
**MiniMax-M2.7-NVFP4** is an NVFP4-quantized version... | [] |
robro612/bge_small_xtr_contrastive_k64 | robro612 | 2026-05-01T12:28:02Z | 0 | 0 | PyLate | [
"PyLate",
"safetensors",
"bert",
"ColBERT",
"sentence-transformers",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:9998000",
"loss:XTRPrimeQA",
"dataset:bclavie/msmarco-10m-triplets",
"base_model:BAAI/bge-small-en-v1.5",
"base_model:finetune:BAAI/bge-sm... | sentence-similarity | 2026-05-01T12:28:00Z | # PyLate model based on BAAI/bge-small-en-v1.5
This is a [PyLate](https://github.com/lightonai/pylate) model finetuned from [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) on the [msmarco-10m-triplets](https://huggingface.co/datasets/bclavie/msmarco-10m-triplets) dataset. It maps sentences & pa... | [
{
"start": 2,
"end": 8,
"text": "PyLate",
"label": "training method",
"score": 0.9093462228775024
},
{
"start": 59,
"end": 65,
"text": "PyLate",
"label": "training method",
"score": 0.8681707382202148
},
{
"start": 509,
"end": 515,
"text": "PyLate",
"l... |
EmGrably/my_style_LoRA | EmGrably | 2026-03-23T21:02:55Z | 4 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"re... | text-to-image | 2026-03-23T21:02:49Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - EmGrably/my_style_LoRA
<Gallery />
## Model description
These are EmGrably/my_style_LoRA LoRA a... | [
{
"start": 314,
"end": 318,
"text": "LoRA",
"label": "training method",
"score": 0.7791408896446228
}
] |
wjbmattingly/Qwen3-VL-8B-german-shorthand-line-3-epochs | wjbmattingly | 2025-12-05T16:41:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen3-VL-8B-Instruct",
"base_model:finetune:Qwen/Qwen3-VL-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-12-05T14:43:19Z | # Model Card for Qwen3-VL-8B-german-shorthand-line-3-epochs
This model is a fine-tuned version of [Qwen/Qwen3-VL-8B-Instruct](https://huggingface.co/Qwen/Qwen3-VL-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = ... | [] |
mbasoz/sentence-embeddings-xllora-mmbert-hin | mbasoz | 2026-03-18T14:23:02Z | 12 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"modernbert",
"sentence-embeddings",
"contrastive-learning",
"xllora",
"sentence-similarity",
"hin",
"arxiv:2603.01732",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2026-03-18T14:22:12Z | # sentence-embeddings-xllora-mmbert-hin
This model provides **sentence embeddings for Hindi** using the **XL-LoRA** method introduced in the paper:
**[Bootstrapping Embeddings for Low Resource Languages](https://arxiv.org/abs/2603.01732)**
The model is based on **mmBERT** and fine-tuned for sentence representation l... | [] |
Msaddak99/sft-tiny-chatbot | Msaddak99 | 2025-08-04T17:30:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-04T17:28:51Z | # Model Card for sft-tiny-chatbot
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you ... | [] |
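The `Quick start` blocks in these TRL-generated cards are cut off after the `question = ...` line. A typical completion, mirroring the standard TRL model-card template; the model id is taken from the row above and the prompt is a placeholder:
```python
from transformers import pipeline

question = "If you had a time machine, where would you go?"  # placeholder prompt
generator = pipeline("text-generation", model="Msaddak99/sft-tiny-chatbot", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```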
chubbyk/SoccerTwos-RL | chubbyk | 2026-02-02T08:16:39Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2026-02-02T08:16:27Z | # **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Document... | [] |
AurelexAI/sentinel-1-pub | AurelexAI | 2026-04-28T18:01:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"sentinel_stage_a",
"feature-extraction",
"custom",
"compliance",
"finance",
"risk-detection",
"text-classification",
"sentinel-stage-a",
"limited-functionality",
"model-version:sentinel-mb-c-d11-20260424",
"custom_code",
"en",
"base_model:answerdotai/Moder... | text-classification | 2026-04-28T18:01:14Z | # sentinel-1-pub
`sentinel-1-pub` is a limited-functionality public Aurelex Sentinel Stage A model for demonstration and evaluation of wealth-management communications risk review. It is not a production Aurelex model and must not be treated as legal, compliance, or investment advice.
## Publisher And Ownership
- Mo... | [] |
pixosg/HunyuanImage-3.0-Nezha-Style-Adapter | pixosg | 2025-12-22T16:59:48Z | 1 | 1 | adapter-transformers | [
"adapter-transformers",
"safetensors",
"text-to-image",
"lora",
"hunyuanimage-3.0",
"peft",
"en",
"base_model:tencent/HunyuanImage-3.0",
"base_model:adapter:tencent/HunyuanImage-3.0",
"license:other",
"region:us"
] | text-to-image | 2025-12-22T10:04:21Z | # HunyuanImage-3.0-Nezha-Style-Adapter
<Gallery />
For more generation examples and their corresponding prompts, please refer to the `images/more_examples` folder and the `captions.csv` file.
## Trigger words
You should use `in nezha style.` at the **end of the prompt** to trigger the image generation.
## Trainin... | [] |
AkiNishi/TD-AdvCompe-v006 | AkiNishi | 2026-03-02T02:15:08Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:u-10bei/sft_alfworld_trajectory_dataset_v5",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache... | text-generation | 2026-03-02T02:13:40Z | # qwen3-4b-agent-trajectory-lora-v006
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **mul... | [
{
"start": 68,
"end": 72,
"text": "LoRA",
"label": "training method",
"score": 0.8926302790641785
},
{
"start": 139,
"end": 143,
"text": "LoRA",
"label": "training method",
"score": 0.915184736251831
},
{
"start": 185,
"end": 189,
"text": "LoRA",
"labe... |
smcleod/distil-ai-slop-detector-gemma-Q6_K-GGUF | smcleod | 2026-02-22T03:44:37Z | 24 | 0 | transformers | [
"transformers",
"gguf",
"ai-detection",
"slop-detector",
"text-classification",
"distillation",
"gemma3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:distil-labs/distil-ai-slop-detector-gemma",
"base_model:quantized:distil-labs/distil-ai-slop-detector-gemma",
"license... | text-generation | 2026-02-22T03:43:24Z | # smcleod/distil-ai-slop-detector-gemma-Q6_K-GGUF
This model was converted to GGUF format from [`distil-labs/distil-ai-slop-detector-gemma`](https://huggingface.co/distil-labs/distil-ai-slop-detector-gemma) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refe... | [] |
SulemanSahib/Qwen2-0.5B-GRPO-test | SulemanSahib | 2026-01-09T10:32:49Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"dataset:AI-MO/NuminaMath-TIR",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-01-09T06:26:37Z | # Model Card for Qwen2-0.5B-GRPO-test
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Q... | [
{
"start": 813,
"end": 817,
"text": "GRPO",
"label": "training method",
"score": 0.7933340668678284
},
{
"start": 1114,
"end": 1118,
"text": "GRPO",
"label": "training method",
"score": 0.816739022731781
}
] |
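The GRPO card above cites the method paper (arXiv:2402.03300). A minimal training sketch under the assumption that TRL's `GRPOTrainer`/`GRPOConfig` API is available; the toy length-based reward and the `trl-lib/tldr` dataset stand in for the card's math-correctness setup:
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Toy reward: prefer shorter completions. A real run would instead
# verify answers against NuminaMath-TIR ground truth.
def reward_len(completions, **kwargs):
    return [-float(len(c)) for c in completions]

dataset = load_dataset("trl-lib/tldr", split="train")

trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="qwen2-grpo", per_device_train_batch_size=2),
    train_dataset=dataset,
)
trainer.train()
```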
introvoyz041/Qwen3-14B-Data-mxfp4-mlx-mlx-4Bit | introvoyz041 | 2025-12-08T21:41:41Z | 7 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3",
"mlx-my-repo",
"text-generation",
"conversational",
"en",
"base_model:nightmedia/Qwen3-14B-Data-mxfp4-mlx",
"base_model:quantized:nightmedia/Qwen3-14B-Data-mxfp4-mlx",
"4-bit",
"region:us"
] | text-generation | 2025-12-08T21:40:58Z | # introvoyz041/Qwen3-14B-Data-mxfp4-mlx-mlx-4Bit
The Model [introvoyz041/Qwen3-14B-Data-mxfp4-mlx-mlx-4Bit](https://huggingface.co/introvoyz041/Qwen3-14B-Data-mxfp4-mlx-mlx-4Bit) was converted to MLX format from [nightmedia/Qwen3-14B-Data-mxfp4-mlx](https://huggingface.co/nightmedia/Qwen3-14B-Data-mxfp4-mlx) using mlx... | [] |
muditbaid/llama3.1-Instruct-qlora-cyberbullying | muditbaid | 2025-11-02T16:07:09Z | 3 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:meta-llama/Meta-Llama-3.1-8B-Instruct",
"llama-factory",
"lora",
"transformers",
"cyberbullying-detection",
"content-moderation",
"text-generation",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-... | text-generation | 2025-11-02T16:06:42Z | # Llama 3.1 Instruct QLoRA – Cyberbullying Classifier
LoRA adapter for moderating social media posts. The model takes a post and returns a structured verdict in the format:
```
label: bully|not_bully; type: age|gender|ethnicity|religion|none
```
If bullying is detected, the adapter also predicts the bullying subtype... | [] |
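The verdict format quoted in the card above is easy to post-process. A small parser for that exact `label: ...; type: ...` layout (pure Python; assumes the model emits the format verbatim):
```python
def parse_verdict(raw: str) -> dict:
    """Parse 'label: bully|not_bully; type: age|gender|ethnicity|religion|none'."""
    fields = dict(part.split(":", 1) for part in raw.split(";"))
    return {key.strip(): value.strip() for key, value in fields.items()}

assert parse_verdict("label: bully; type: gender") == {"label": "bully", "type": "gender"}
assert parse_verdict("label: not_bully; type: none")["type"] == "none"
```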
AmpereComputing/granite-4.0-h-small-gguf | AmpereComputing | 2026-01-13T16:51:04Z | 26 | 1 | null | [
"gguf",
"base_model:ibm-granite/granite-4.0-h-small",
"base_model:quantized:ibm-granite/granite-4.0-h-small",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-13T16:42:32Z | 
# Ampere® optimized llama.cpp

... | [] |
GetSoloTech/FoodStack | GetSoloTech | 2026-03-19T16:12:33Z | 20 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"solo",
"fine-tuned",
"lora",
"unsloth",
"conversational",
"dataset:GetSoloTech/Code-Reasoning",
"base_model:google/gemma-3-270m-it",
"base_model:adapter:google/gemma-3-270m-it",
"text-generation-inference",
"endpoints_compat... | text-generation | 2026-03-19T16:12:04Z | <a href="https://hub.getsolo.tech"><img src="https://raw.githubusercontent.com/GetSoloTech/solo-cli/main/media/solo-banner.png" alt="Solo" width="200"></a>
## Model Details
| | |
|---|---|
| **Base Model** | [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it) |
| **Method** | LoRA (PEFT) |
| **Par... | [
{
"start": 133,
"end": 137,
"text": "Solo",
"label": "training method",
"score": 0.9498504400253296
},
{
"start": 791,
"end": 795,
"text": "Solo",
"label": "training method",
"score": 0.9434553980827332
}
] |
Shawon16/VideoMAE_wlasl__codeCheck | Shawon16 | 2025-11-12T18:56:29Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2025-11-12T16:12:39Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VideoMAE_wlasl__codeCheck
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-b... | [] |
parlange/deit3-gravit-a3 | parlange | 2025-09-06T21:42:29Z | 3 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"vision-transformer",
"image-classification",
"deit3",
"gravitational-lensing",
"strong-lensing",
"astronomy",
"astrophysics",
"dataset:C21",
"arxiv:2509.00226",
"license:apache-2.0",
"model-index",
"region:us"
] | image-classification | 2025-09-06T21:41:29Z | # 🌌 deit3-gravit-a3
🔭 This model is part of **GraViT**: Transfer Learning with Vision Transformers and MLP-Mixer for Strong Gravitational Lens Discovery
🔗 **GitHub Repository**: [https://github.com/parlange/gravit](https://github.com/parlange/gravit)
## 🛰️ Model Details
- **🤖 Model Type**: DeiT3
- **🧪 Experim... | [] |
AbrarAbhinaya/distilbertScenario1-news-classifier | AbrarAbhinaya | 2025-10-16T08:34:13Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-10-16T08:10:49Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbertScenario1-news-classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distil... | [] |
Bheri/ithasa-jina-colbertv2 | Bheri | 2025-11-03T17:44:10Z | 1 | 0 | PyLate | [
"PyLate",
"safetensors",
"xlm-roberta",
"ColBERT",
"sentence-transformers",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:256886",
"loss:Contrastive",
"custom_code",
"arxiv:1908.10084",
"base_model:jinaai/jina-colbert-v2",
"base_model:finetune:jinaai/... | sentence-similarity | 2025-11-03T17:42:52Z | # PyLate model based on jinaai/jina-colbert-v2
This is a [PyLate](https://github.com/lightonai/pylate) model finetuned from [jinaai/jina-colbert-v2](https://huggingface.co/jinaai/jina-colbert-v2). It maps sentences & paragraphs to sequences of 128-dimensional dense vectors and can be used for semantic textual similari... | [
{
"start": 2,
"end": 8,
"text": "PyLate",
"label": "training method",
"score": 0.9236208200454712
},
{
"start": 59,
"end": 65,
"text": "PyLate",
"label": "training method",
"score": 0.9128581881523132
},
{
"start": 96,
"end": 102,
"text": "pylate",
"la... |
jasonren051212/lr-record0121 | jasonren051212 | 2026-01-21T21:32:36Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:jasonren051212/lr-record0121",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-21T21:31:55Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
RainyNight17/policy-test-three-2 | RainyNight17 | 2025-12-06T01:07:04Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:RainyNight17/record-test-three-2",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-06T01:06:57Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
vhanagwal/snark-dpo | vhanagwal | 2025-10-17T01:33:10Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2025-10-17T01:29:05Z | # Model Card for
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machi... | [
{
"start": 181,
"end": 184,
"text": "TRL",
"label": "training method",
"score": 0.8157417178153992
},
{
"start": 913,
"end": 916,
"text": "DPO",
"label": "training method",
"score": 0.8044921159744263
},
{
"start": 1223,
"end": 1226,
"text": "DPO",
"la... |
DataScience-UIBK/Reason-mxbai-colbert-v0.1-32m | DataScience-UIBK | 2026-04-25T00:14:44Z | 0 | 2 | PyLate | [
"PyLate",
"safetensors",
"modernbert",
"ColBERT",
"sentence-transformers",
"sentence-similarity",
"feature-extraction",
"late-interaction",
"reasoning-retrieval",
"edge",
"generated_from_trainer",
"loss:CachedContrastive",
"en",
"dataset:hanhainebula/bge-reasoner-data",
"dataset:reasonir... | sentence-similarity | 2026-04-24T21:28:29Z | <img src="./logo_reason_mxai.png" width="500" height="auto">
# Reason-mxbai-colbert-v0.1-32m
**v0.1** of the Reason-mxbai-colbert series — same edge-scale late-interaction retriever as v0, **retrained with the correct projection-head architecture** (`use_residual: true` in the 2_Dense layer). Mean BRIGHT nDCG@10 impr... | [] |
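Several rows in this dump (robro612, Bheri, and the one above) are PyLate late-interaction retrievers. A minimal encoding sketch, assuming the `models.ColBERT` / `encode(..., is_query=...)` API from PyLate's public README:
```python
from pylate import models

# Model id taken from the row above.
model = models.ColBERT(model_name_or_path="DataScience-UIBK/Reason-mxbai-colbert-v0.1-32m")

# ColBERT-style models produce one vector per token, not one per sentence.
query_embeddings = model.encode(["what is late interaction retrieval?"], is_query=True)
doc_embeddings = model.encode(["Late interaction scores query and document tokens pairwise."], is_query=False)
print(query_embeddings[0].shape)  # (num_query_tokens, embedding_dim)
```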
mradermacher/qwen3-4b-linkedart-whole-GGUF | mradermacher | 2025-08-14T14:05:18Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:yale-cultural-heritage/qwen3-4b-linkedart-whole",
"base_model:quantized:yale-cultural-heritage/qwen3-4b-linkedart-whole",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-14T13:51:38Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
kandinskylab/KVAE-3D-2.0-t4s16 | kandinskylab | 2026-04-27T14:03:26Z | 226 | 7 | KVAE 3D | [
"KVAE 3D",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"vae",
"license:apache-2.0",
"region:us"
] | null | 2026-03-30T08:44:34Z | <div align="center">
<a href="https://github.com/kandinskylab/kvae">Github</a> | <a href="https://habr.com/ru/companies/sberbank/articles/1016814/">Habr article</a> | <a href="https://kandinskylab.ai/">Project Page</a> | Technical Report (soon)
</div>
<h1>KVAE 2.0: Video tokenizers </h1>
KVAE 2.0 and previous KVAE ... | [] |
sugam24/dots-ocr-awq-4bit | sugam24 | 2026-02-09T07:21:14Z | 83 | 1 | transformers | [
"transformers",
"safetensors",
"dots_ocr",
"text-generation",
"ocr",
"vision",
"quantized",
"awq",
"4bit",
"image-to-text",
"custom_code",
"base_model:rednote-hilab/dots.ocr",
"base_model:quantized:rednote-hilab/dots.ocr",
"license:apache-2.0",
"compressed-tensors",
"region:us"
] | image-to-text | 2026-02-09T07:20:53Z | # dots.ocr AWQ 4-bit Quantized
This is a 4-bit AWQ quantized version of [rednote-hilab/dots.ocr](https://huggingface.co/rednote-hilab/dots.ocr).
## Model Details
- **Base Model**: rednote-hilab/dots.ocr
- **Quantization**: W4A16 (4-bit weights, 16-bit activations)
- **Method**: llm-compressor
- **Size**: ~1.5GB (red... | [] |
MinhPhuc0804/e5-docling-checkthat-task1-v1 | MinhPhuc0804 | 2026-04-15T03:50:42Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:17319",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1807.03748",
"base_model:intfloat/e5-large-v2",
"base_model:finetune:intfloat/e5-large-v2... | sentence-similarity | 2026-04-15T03:50:17Z | # SentenceTransformer based on intfloat/e5-large-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/e5-large-v2](https://huggingface.co/intfloat/e5-large-v2). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for retrieval.
## Model Details... | [] |
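The card above is a Sentence Transformers retriever built on e5-large-v2. A minimal usage sketch; note that e5-family models expect `query:` / `passage:` prefixes, and `model.similarity` assumes sentence-transformers >= 3.0:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("MinhPhuc0804/e5-docling-checkthat-task1-v1")

# e5-style checkpoints are trained with "query: " / "passage: " prefixes.
queries = model.encode(["query: who verified this claim?"])
passages = model.encode(["passage: The claim was verified by independent fact-checkers."])
print(model.similarity(queries, passages))
```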
HussainRaza05/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled | HussainRaza05 | 2026-03-09T07:22:57Z | 13 | 0 | null | [
"safetensors",
"qwen3_5",
"unsloth",
"qwen",
"qwen3.5",
"reasoning",
"chain-of-thought",
"Dense",
"text-generation",
"conversational",
"en",
"zh",
"dataset:nohurry/Opus-4.6-Reasoning-3000x-filtered",
"dataset:Jackrong/Qwen3.5-reasoning-700x",
"base_model:Qwen/Qwen3.5-27B",
"base_model:... | text-generation | 2026-03-09T07:22:56Z | # 🌟 Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled
> 📢 **Release Note**
> **Build Environment Upgrades:**
> - **Fine-tuning Framework**: **Unsloth 2026.3.3**
> - **Core Dependencies**: **Transformers 5.2.0**
> - This model fixes the crash in the official model caused by the Jinja template not supporting the **"dev... | [] |
thedalex/Mistral-7B-Instruct-v0.3-Q4_K_M-GGUF | thedalex | 2026-03-27T14:26:50Z | 64 | 0 | null | [
"gguf",
"mistral",
"quantized",
"Q4_K_M",
"4GB_Model",
"Education",
"OpenMarker",
"text-generation",
"en",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:quantized:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversationa... | text-generation | 2026-03-26T10:19:21Z | # Mistral-7B-Instruct-v0.3 GGUF (Q4_K_M)
Quantised GGUF version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3), quantised using [llama.cpp](https://github.com/ggerganov/llama.cpp).
---
## 🎓 About OpenMarker
This model was quantised primarily for use with **[OpenM... | [] |
unsloth/Qwen-Image-Layered-GGUF | unsloth | 2026-01-09T21:58:24Z | 2,280 | 47 | ggml | [
"ggml",
"gguf",
"quantized",
"unsloth",
"qwen",
"image-text-to-image",
"en",
"zh",
"arxiv:2512.15603",
"base_model:Qwen/Qwen-Image-Layered",
"base_model:quantized:Qwen/Qwen-Image-Layered",
"license:apache-2.0",
"region:us"
] | image-text-to-image | 2025-12-19T15:34:26Z | > [!NOTE]
> This is a GGUF quantized version of [Qwen-Image-Layered](https://huggingface.co/Qwen/Qwen-Image-Layered).
> unsloth/Qwen-Image-Layered-GGUF uses [Unsloth Dynamic 2.0](https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs) methodology for SOTA performance. Important layers are upcasted to higher precision... | [] |
reaperdoesntknow/Structure-Over-Scale | reaperdoesntknow | 2026-05-04T15:26:00Z | 0 | 1 | null | [
"convergentintel",
"research-paper",
"cpu-training",
"knowledge-distillation",
"en",
"doi:10.57967/hf/8165",
"license:apache-2.0",
"region:us"
] | null | 2026-03-27T06:58:55Z | # Structure Over Scale: CPU-Native Training of Sparse Cognitive Architectures at $1.60 Per Model
**Convergent Intelligence LLC: Research Division**
Roy Colca Jr.
March 2026
---
## Abstract
We present a methodology for training small language models on CPU at FP32 precision that achieves capability-per-dollar effi... | [] |
jkminder/Qwen3-8B-LF-EM_a0_aligned_financial_51bb8a50 | jkminder | 2026-01-12T15:16:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"endpoints_compatible",
"region:us"
] | null | 2026-01-12T15:16:21Z | # Model Card for Qwen3-8B-LF-EM_a0_aligned_financial_51bb8a50
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time mac... | [] |
CiroN2022/tech-streetwear-sdxl-v10 | CiroN2022 | 2026-04-17T10:10:48Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2026-04-17T10:05:01Z | # Tech-Streetwear SDXL v1.0
## 📝 Description
_No description._
## ⚙️ Technical Details
* **Type**: LORA
* **Base**: SDXL 1.0
* **Trigger Words**: `None`
## 🖼️ Gallery

---

---
![Tech-St... | [] |
nightmedia/Qwen3-Coder-REAP-25B-A3B-qx64-hi-mlx | nightmedia | 2025-10-27T15:01:10Z | 213 | 3 | mlx | [
"mlx",
"safetensors",
"qwen3_moe",
"qwen-coder",
"MOE",
"pruning",
"compression",
"text-generation",
"conversational",
"en",
"base_model:cerebras/Qwen3-Coder-REAP-25B-A3B",
"base_model:quantized:cerebras/Qwen3-Coder-REAP-25B-A3B",
"license:apache-2.0",
"6-bit",
"region:us"
] | text-generation | 2025-10-20T23:35:32Z | # Qwen3-Coder-REAP-25B-A3B-qx64-hi-mlx
The regular Deckard (qx) formula uses embeddings at the same bit width as the data stores, in this case 4-bit.
The head and select attention paths are enhanced to 6-bit, and the model is quantized with group size 32 (hi).
There is an updated model: [Qwen3-Coder-REAP-25B-A3B-qx65x-hi-m... | [] |
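MLX quants like the one above are typically run with mlx-lm. A minimal sketch, assuming the `mlx_lm.load` / `generate` API and the repo id from this row (a ~25B model, so expect a large download):
```python
from mlx_lm import load, generate

model, tokenizer = load("nightmedia/Qwen3-Coder-REAP-25B-A3B-qx64-hi-mlx")
prompt = "Write a Python function that reverses a linked list."
print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```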
zzq1zh/xvla-mink-merged-3cams-v1-single-random-27750steps-v | zzq1zh | 2026-04-06T08:04:53Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"xvla",
"dataset:local/dataset_50_single_random",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-06T08:04:17Z | # Model Card for xvla
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.c... | [] |
nullHawk/Param-1-2.9B-Instruct-Refusal-Abliterated | nullHawk | 2026-01-13T10:34:56Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"parambharatgen",
"text-generation",
"refusal-ablation",
"mechanistic-interpretability",
"uncensored",
"nsfw",
"conversational",
"custom_code",
"arxiv:2406.11717",
"base_model:bharatgenai/Param-1-2.9B-Instruct",
"base_model:finetune:bharatgenai/Param-1-2.9B-Ins... | text-generation | 2026-01-13T09:33:25Z | # Param-1-2.9B-Instruct (Refusal Ablated)

This model is a modified version of [bharatgenai/Param-1-2.9B-Instruct](https://huggingface.co/bharatgenai/Param-1-2.9B-Instruct) with the refusal direction abla... | [] |
contemmcm/908ac3876256a23d7215469f4dc3e128 | contemmcm | 2025-11-19T00:34:09Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"luke",
"text-classification",
"generated_from_trainer",
"base_model:studio-ousia/mluke-base",
"base_model:finetune:studio-ousia/mluke-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-19T00:18:26Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 908ac3876256a23d7215469f4dc3e128
This model is a fine-tuned version of [studio-ousia/mluke-base](https://huggingface.co/studio-ou... | [
{
"start": 501,
"end": 509,
"text": "F1 Macro",
"label": "training method",
"score": 0.710845947265625
}
] |
dan2-ux/fine-tuned_mistral-tt | dan2-ux | 2025-09-09T08:05:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"endpoints_compatible",
"region:us"
] | null | 2025-09-09T06:38:19Z | # Model Card for fine-tuned_mistral-tt
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time ma... | [] |
matrixportalx/Huihui-gemma-3n-E4B-it-abliterated-IQ4_NL-GGUF | matrixportalx | 2025-09-11T22:14:49Z | 11 | 1 | transformers | [
"transformers",
"gguf",
"automatic-speech-recognition",
"automatic-speech-translation",
"audio-text-to-text",
"video-text-to-text",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:huihui-ai/Huihui-gemma-3n-E4B-it-abliterated",
"base_model:quantized... | image-text-to-text | 2025-09-11T22:12:03Z | # matrixportalx/Huihui-gemma-3n-E4B-it-abliterated-IQ4_NL-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-gemma-3n-E4B-it-abliterated`](https://huggingface.co/huihui-ai/Huihui-gemma-3n-E4B-it-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-m... | [] |
Jackrong/Qwen3-4B-Gemini-Flash-Distilled-Instruct-GGUF | Jackrong | 2026-02-06T15:22:57Z | 232 | 0 | null | [
"gguf",
"qwen3",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-02-06T15:18:09Z | # Qwen3-4B-Gemini-Flash-Distilled-Instruct-GGUF : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `./llama.cpp/llama-cli -hf Jackrong/Qwen3-4B-Gemini-Flash-Distilled-Instruct-GGUF --jinja`
- For multimodal mo... | [
{
"start": 117,
"end": 124,
"text": "Unsloth",
"label": "training method",
"score": 0.7039927840232849
},
{
"start": 155,
"end": 162,
"text": "unsloth",
"label": "training method",
"score": 0.7473481893539429
},
{
"start": 660,
"end": 667,
"text": "unsloth... |
patrickamadeus/momh-2k1img-step-7800 | patrickamadeus | 2026-02-16T14:58:27Z | 0 | 0 | nanovlm | [
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"research",
"image-text-to-text",
"license:mit",
"region:us"
] | image-text-to-text | 2026-02-16T14:57:59Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: nanovlm
license: mit
pipeline_tag: image-text-to-text
tags:
- vision-language
- multimodal
- research
---
**nan... | [] |
mradermacher/Qwen3-VL-4B-Instruct-abliterated-v2-GGUF | mradermacher | 2026-01-08T06:47:40Z | 248 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:TomSawyer667/Qwen3-VL-4B-Instruct-abliterated-v2",
"base_model:quantized:TomSawyer667/Qwen3-VL-4B-Instruct-abliterated-v2",
"license:unknown",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-08T06:37:52Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
vinnakharisma46/humanoid-foto-model | vinnakharisma46 | 2026-01-08T09:54:11Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-08T09:53:59Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# humanoid-foto-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
... | [] |
jjr1007/act_02_lowres | jjr1007 | 2026-04-28T15:21:26Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:jjr1007/record-test_5_JR_lowres",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-28T15:21:18Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
mradermacher/Pervert-Maid-RP3-3.2-1B-GGUF | mradermacher | 2026-04-30T03:03:41Z | 17 | 2 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"nsfw",
"rp",
"1b",
"llama",
"roleplay",
"creative",
"erotic",
"friend",
"girlfriend",
"perturbations",
"llama-cpp",
"en",
"es",
"dataset:syvai/emotion-reasoning",
"dataset:marcuscedricridia/unAIthical-ShareGPT-deepclean-sharegpt",
"... | null | 2026-04-30T02:52:29Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
1-800-LLMs/tiny-aya-earth | 1-800-LLMs | 2026-03-11T20:21:46Z | 99 | 0 | transformers | [
"transformers",
"safetensors",
"cohere2",
"text-generation",
"conversational",
"en",
"nl",
"fr",
"it",
"pt",
"ro",
"es",
"cs",
"pl",
"uk",
"ru",
"el",
"de",
"da",
"sv",
"no",
"ca",
"gl",
"cy",
"ga",
"eu",
"hr",
"lv",
"lt",
"sk",
"sl",
"et",
"fi",
"hu... | text-generation | 2026-03-11T20:18:26Z | # **Model Card for tiny-aya-earth**

**Best for West Asian and African languages.** For other regions, check [tiny-aya-global](https://huggingface.co/CohereLabs/tiny-aya-global), [tiny-aya-fire](https://huggingface.co/CohereLabs/tiny-aya-fire), [tiny-aya-water](https://hug... | [] |
WangKaiLin/Pipeowl-1.8.3-jp-Whitebox | WangKaiLin | 2026-03-25T21:07:04Z | 0 | 0 | null | [
"pipeowl",
"embeddings",
"retrieval",
"transformer-free",
"safetensors",
"edge-ai",
"whitebox",
"ja",
"base_model:WangKaiLin/PipeOwl-1.8-jp-parameter-golf",
"base_model:finetune:WangKaiLin/PipeOwl-1.8-jp-parameter-golf",
"license:mit",
"region:us"
] | null | 2026-03-25T20:50:26Z | # Pipeowl-1.8.3-jp-Whitebox (Geometric Embedding)
A transformer-free semantic retrieval engine.
PipeOwl performs deterministic vocabulary scoring over a static embedding field:
score = α⋅base + (1 - α⋅base)⋅Δfield
- BPB: measured per byte
- token NLL: measured per token
token NLL: 12.943284891453972
where:
- base = cosine simi... | [] |
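The PipeOwl scoring rule quoted above transcribes directly into code. A sketch for a single vocabulary item, assuming `base` is a cosine similarity and treating `α` and `Δfield` as given inputs (the card does not fix their values):
```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def pipeowl_score(query_vec, vocab_vec, delta_field: float, alpha: float) -> float:
    """score = alpha * base + (1 - alpha * base) * delta_field, base = cosine similarity."""
    base = cosine(query_vec, vocab_vec)
    return alpha * base + (1.0 - alpha * base) * delta_field

print(pipeowl_score(np.array([1.0, 0.0]), np.array([0.6, 0.8]), delta_field=0.2, alpha=0.5))
```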
wattsavingpro/wattsavingpro | wattsavingpro | 2026-02-22T07:40:02Z | 0 | 0 | null | [
"region:us"
] | null | 2026-02-22T07:37:26Z | # Watt Saving Pro – Smart Energy-Saving Device to Lower Your Electricity Bills
With electricity prices steadily increasing and environmental concerns becoming more urgent, homeowners are actively searching for reliable ways to reduce their power consumption. While switching to LED bulbs and unplugging unused devices h... | [] |
Muapi/digital-impressions-impressionist-style-lora | Muapi | 2025-08-28T14:25:20Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-28T14:25:05Z | # Digital Impressions: Impressionist Style LoRA

**Base model**: Flux.1 D
**Trained words**: dibrshstrk
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora... | [] |
CSU-JPG/FlowInOne | CSU-JPG | 2026-04-15T12:05:06Z | 0 | 15 | null | [
"flowinone",
"image-to-image",
"en",
"dataset:CSU-JPG/VisPrompt5M",
"dataset:CSU-JPG/VPBench",
"arxiv:2604.06757",
"license:apache-2.0",
"region:us"
] | image-to-image | 2026-03-28T12:29:46Z | <div align="center">
<h2 align="center" style="margin-top: 0; margin-bottom: 15px;">
<span style="color:#0052CC">F</span><span style="color:#135FD0">l</span><span style="color:#266CD4">o</span><span style="color:#3979D7">w</span><span style="color:#4C86DB">I</span><span style="color:#6093DF">n</span><span style="... | [] |
xummer/qwen3-8b-belebele-lora-pes-arab | xummer | 2026-03-07T00:21:24Z | 13 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen3-8B",
"llama-factory",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-8B",
"license:other",
"region:us"
] | text-generation | 2026-03-07T00:20:48Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# belebele_pes_Arab
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the belebele_pes... | [] |
Nerdzmasterz/Mixtral-8x7B-Instruct-v0.1 | Nerdzmasterz | 2026-05-04T07:30:05Z | 0 | 0 | vllm | [
"vllm",
"safetensors",
"mixtral",
"fr",
"it",
"de",
"es",
"en",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"base_model:finetune:mistralai/Mixtral-8x7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2026-05-04T07:30:05Z | # Model Card for Mixtral-8x7B
### Tokenization with `mistral-common`
```py
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
mistral_models_path = "MISTRAL... | [] |
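The tokenization snippet above is truncated right after its imports. A typical continuation under the assumption that the public `mistral-common` API applies; `MistralTokenizer.v3()` is used here in place of the card's truncated on-disk path:
```python
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

tokenizer = MistralTokenizer.v3()  # bundled v3 tokenizer; the card loads it from disk instead

request = ChatCompletionRequest(messages=[UserMessage(content="Explain MoE routing in one sentence.")])
tokenized = tokenizer.encode_chat_completion(request)
print(len(tokenized.tokens))
```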
BaoLocTown/tuned_multilingual-e5-large_combined_v4_200k | BaoLocTown | 2026-01-19T12:58:24Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"dense",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2026-01-19T12:54:21Z | # SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Mo... | [] |
Ausar12119118/act_cube_in_bowl_v2_policy | Ausar12119118 | 2026-04-29T06:02:23Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:Ausar12119118/cube_in_bowl_v2",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-29T06:02:15Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
mradermacher/pascal-rcg-GGUF | mradermacher | 2025-11-19T04:46:50Z | 2 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:cathuriges/pascal-rcg",
"base_model:quantized:cathuriges/pascal-rcg",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-11-19T03:32:08Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
mindchain/t5gemma2-sae-all-layers | mindchain | 2025-12-27T19:55:41Z | 0 | 2 | transformers | [
"transformers",
"sae",
"sparse-autoencoder",
"t5gemma",
"t5gemma2",
"mechanistic-interpretability",
"activation-steering",
"steering",
"neuronpedia",
"gemma-scope",
"sae-lens",
"llm-interpretability",
"explainable-ai",
"xai",
"model-steering",
"feature-engineering",
"representation-l... | null | 2025-12-27T15:45:56Z | # T5Gemma 2 Sparse Autoencoders (All 36 Layers)
**Sparse Autoencoders (SAEs)** trained on all 36 layers of `google/t5gemma-2-270m-270m` for mechanistic interpretability and activation steering.
[](https://colab.research.google.com/github/haddoc... | [] |
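The SAE card above targets mechanistic interpretability and activation steering. As an illustration of the technique (not the repo's exact architecture), a minimal PyTorch sparse autoencoder with a ReLU encoder, linear decoder, and L1 sparsity penalty; the 640/8x dimensions are placeholders:
```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Overcomplete ReLU encoder + linear decoder; L1 on codes enforces sparsity."""
    def __init__(self, d_model: int = 640, expansion: int = 8):
        super().__init__()
        self.enc = nn.Linear(d_model, expansion * d_model)
        self.dec = nn.Linear(expansion * d_model, d_model)

    def forward(self, acts: torch.Tensor):
        codes = torch.relu(self.enc(acts))  # sparse feature activations
        return self.dec(codes), codes

sae = SparseAutoencoder()
x = torch.randn(4, 640)  # a batch of residual-stream activations (placeholder width)
recon, codes = sae(x)
loss = (recon - x).pow(2).mean() + 1e-3 * codes.abs().mean()  # reconstruction + L1 sparsity
```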
qualiaadmin/1c76ebcc-23d2-497a-bd72-3957c916d46f | qualiaadmin | 2025-11-05T12:08:30Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:Calvert0921/SmolVLA_LiftCube_Franka_100",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-05T12:08:17Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
pzal/lerobot__pi0__pp__delta | pzal | 2026-05-01T01:20:44Z | 29 | 0 | lerobot | [
"lerobot",
"safetensors",
"pi0",
"robotics",
"dataset:pzal/new__pp__delta",
"license:apache-2.0",
"region:us"
] | robotics | 2026-05-01T01:19:22Z | # Model Card for pi0
<!-- Provide a quick summary of what the model is/does. -->
**π₀ (Pi0)**
π₀ is a Vision-Language-Action model for general robot control, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀ represents a breakthrough ... | [] |
Wizardo1/Wizarod-Germany | Wizardo1 | 2026-03-25T12:01:33Z | 0 | 0 | null | [
"de",
"license:artistic-2.0",
"region:us"
] | null | 2026-03-25T11:58:59Z | Wizardo Casino Bewertung 2026 - Slots, Kryptozahlungen und PWAEhrlicher Wizardo Casino Test: 121 Anbieter und schnelle Krypto-AuszahlungenEntdecken Sie die riesige Auswahl von Wizardo Casino mit 121 Anbietern, sicheren Krypto-Zahlungen und 24/7 Support. Anleitung zur PWA-Installation.
Wizardo Casino Review. Ein tiefer... | [] |
openenv-community/k8s-sre-agent-qwen3-1.7b | openenv-community | 2026-03-08T17:09:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"endpoints_compatible",
"region:us"
] | null | 2026-03-08T14:23:25Z | # Model Card for k8s-sre-agent-qwen3-1.7b
This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could ... | [] |
asher577/exp_warmup25_beta325_l0_1.0 | asher577 | 2026-04-22T09:38:32Z | 0 | 0 | null | [
"pytorch",
"region:us"
] | null | 2026-04-22T09:37:54Z | # exp_warmup25_beta325_l0_1.0
Weight-sparse transformer trained with the procedure from Gao et al. (2025).
## Model Details
- **Layers**: 2
- **Model Dimension**: 3072
- **Context Length**: 512
- **Head Dimension**: 16
- **Vocabulary Size**: 4096
## Sparsity
- **Weight Sparsity**: True
- **Target L0 Fraction**: 1.... | [] |
Kudod/LLama3-2-1B-distortion-fold-1-1a-v1 | Kudod | 2026-01-25T12:53:54Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-classification",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-01-25T12:42:32Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LLama3-2-1B-distortion-fold-1-1a-v1
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-l... | [
{
"start": 504,
"end": 512,
"text": "F1 Macro",
"label": "training method",
"score": 0.73507159948349
},
{
"start": 1315,
"end": 1323,
"text": "F1 Macro",
"label": "training method",
"score": 0.728360116481781
}
] |
mradermacher/Quelix-8B-v0.1-GGUF | mradermacher | 2026-01-07T13:58:37Z | 23 | 2 | transformers | [
"transformers",
"gguf",
"en",
"base_model:KenjiOU/Quelix-8B-v0.1",
"base_model:quantized:KenjiOU/Quelix-8B-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-02T17:04:16Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
AfriScience-MT/gemma_3_4b_it-lora-r32-amh-eng | AfriScience-MT | 2026-02-11T09:43:01Z | 1 | 0 | peft | [
"peft",
"safetensors",
"translation",
"african-languages",
"scientific-translation",
"afriscience-mt",
"lora",
"gemma",
"am",
"en",
"base_model:google/gemma-3-4b-it",
"base_model:adapter:google/gemma-3-4b-it",
"license:apache-2.0",
"model-index",
"region:us"
] | translation | 2026-02-11T09:42:51Z | # gemma_3_4b_it-lora-r32-amh-eng
[](https://huggingface.co/AfriScience-MT/gemma_3_4b_it-lora-r32-amh-eng)
This is a **LoRA adapter** for the AfriScience-MT project, enabling efficient scientific machine translation for Afric... | [
{
"start": 214,
"end": 218,
"text": "LoRA",
"label": "training method",
"score": 0.7463431358337402
},
{
"start": 544,
"end": 548,
"text": "LoRA",
"label": "training method",
"score": 0.7150124907493591
},
{
"start": 571,
"end": 575,
"text": "LoRA",
"l... |
mradermacher/MistralAI-Magistral-Small-2507-Heretic-Uncensored-GGUF | mradermacher | 2026-02-18T05:46:52Z | 403 | 1 | transformers | [
"transformers",
"gguf",
"ablation",
"research",
"nlp",
"24b",
"mistral",
"heretic",
"magistral",
"roleplay",
"en",
"base_model:Silicone-Moss/MistralAI-Magistral-Small-2507-Heretic-Uncensored",
"base_model:quantized:Silicone-Moss/MistralAI-Magistral-Small-2507-Heretic-Uncensored",
"license:... | null | 2026-02-17T19:00:03Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
kiratan/qwen3-4b-structeval-lora-20 | kiratan | 2026-02-05T16:18:43Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:daichira/structured-hard-sft-4k",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-05T16:18:24Z | # qwen3-4b-structured-output-lora
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **s... | [
{
"start": 133,
"end": 138,
"text": "QLoRA",
"label": "training method",
"score": 0.810691773891449
},
{
"start": 574,
"end": 579,
"text": "QLoRA",
"label": "training method",
"score": 0.7333829402923584
}
] |
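This card, like several others in the dump, ships only LoRA adapter weights and notes that the base model must be loaded separately. A minimal PEFT loading sketch using the base and adapter ids from the row above:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen3-4B-Instruct-2507"
adapter_id = "kiratan/qwen3-4b-structeval-lora-20"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base, adapter_id)
```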
philipp-zettl/qwen3-0.6b-german | philipp-zettl | 2026-04-01T10:31:33Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:unsloth/qwen3-0.6b-unsloth-bnb-4bit",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"dataset:FreedomIntelligence/alpaca-gpt4-deutsch",
"dataset:FreedomIntelligence/evol-instruct-deutsch",
"dataset:FreedomIntel... | text-generation | 2026-03-30T12:41:09Z | # qwen3-0.6b-german
A German instruction-following model fine-tuned from [Qwen3-0.6B](https://huggingface.co/Qwen3-0.6B)
using QLoRA on the same four German instruct datasets used in the
[LLäMmlein paper (Pfister et al., ACL 2025)](https://aclanthology.org/2025.acl-long.111).
Trained on a single **RTX 4070 Ti (8GB VR... | [
{
"start": 128,
"end": 133,
"text": "QLoRA",
"label": "training method",
"score": 0.7805045247077942
}
] |
AleksanderObuchowski/Qwen3-ASR-1.7B-med-pl-lora-decoder-only | AleksanderObuchowski | 2026-02-04T00:53:57Z | 5 | 1 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen3-ASR-1.7B",
"lora",
"transformers",
"pl",
"base_model:Qwen/Qwen3-ASR-1.7B",
"license:apache-2.0",
"region:us"
] | null | 2026-02-03T00:07:37Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen3-ASR-1.7B-med-pl-lora-decoder-only
This model is a fine-tuned version of [Qwen/Qwen3-ASR-1.7B](https://huggingface.co/Qwen/Q... | [] |
memescreamer/Qwen3-TTS-12Hz-0.6B-Base | memescreamer | 2026-03-22T14:22:21Z | 16 | 0 | null | [
"safetensors",
"qwen3_tts",
"mirror",
"raidio-bot",
"region:us"
] | null | 2026-03-22T14:21:41Z | # Mirror of Qwen/Qwen3-TTS-12Hz-0.6B-Base
This is a pinned mirror of [Qwen/Qwen3-TTS-12Hz-0.6B-Base](https://huggingface.co/Qwen/Qwen3-TTS-12Hz-0.6B-Base).
| Field | Value |
|-------|-------|
| Upstream | [Qwen/Qwen3-TTS-12Hz-0.6B-Base](https://huggingface.co/Qwen/Qwen3-TTS-12Hz-0.6B-Base) |
| Revision | `5d83992436e... | [] |
Multilingual-Multimodal-NLP/IndustrialCoder | Multilingual-Multimodal-NLP | 2026-03-27T12:20:50Z | 918 | 41 | transformers | [
"transformers",
"safetensors",
"iquestcoder",
"text-generation",
"code",
"industrial-code",
"verilog",
"cuda",
"triton",
"chip-design",
"cad",
"conversational",
"custom_code",
"arxiv:2603.16790",
"license:apache-2.0",
"eval-results",
"region:us"
] | text-generation | 2026-03-13T11:20:54Z | # InCoder-32B: Code Foundation Model for Industrial Scenarios
<div align="center">
[](https://huggingface.co/Multilingual-Multimodal-NLP/IndustrialCoder)
[](https://modelscope.cn... | [] |
Tanayuya/qwen3-4b-agent-trajectory-lora-ver4 | Tanayuya | 2026-02-26T10:46:02Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:Tanayuya/sft_dataset_ver2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:... | text-generation | 2026-02-26T10:44:32Z | # qwen3-4b-agent-trajectory-lora-ver4
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **mul... | [
{
"start": 68,
"end": 72,
"text": "LoRA",
"label": "training method",
"score": 0.9037631154060364
},
{
"start": 139,
"end": 143,
"text": "LoRA",
"label": "training method",
"score": 0.925362765789032
},
{
"start": 185,
"end": 189,
"text": "LoRA",
"labe... |
AdityaaXD/credit-score-classifier | AdityaaXD | 2026-01-18T19:42:51Z | 0 | 0 | sklearn | [
"sklearn",
"tabular-classification",
"credit-score",
"random-forest",
"finance",
"banking",
"en",
"dataset:custom",
"model-index",
"region:us"
] | tabular-classification | 2026-01-18T18:59:58Z | # 💳 Credit Score Classifier
A **Random Forest Classifier** trained to predict customer credit scores into three categories: **Good**, **Standard**, and **Poor**.
## Model Description
This model analyzes customer financial data and behavioral patterns to classify their credit worthiness. It was trained on a comprehe... | [
{
"start": 539,
"end": 551,
"text": "Scikit-learn",
"label": "training method",
"score": 0.8112326264381409
}
] |
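The row above describes a scikit-learn random forest over tabular credit features. A minimal three-class training-and-scoring sketch with synthetic placeholder features (the repo's actual schema is not shown in the excerpt):
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder tabular features (income, utilization, delinquencies, ...).
rng = np.random.default_rng(0)
X = rng.random((1000, 8))
y = rng.choice(["Good", "Standard", "Poor"], size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
print(clf.score(X_test, y_test))  # held-out accuracy
```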
Benasd/Qwen3.5-4B-Instruct-GGUF | Benasd | 2026-03-16T09:55:52Z | 791 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"image-text-to-text",
"base_model:Qwen/Qwen3.5-4B",
"base_model:quantized:Qwen/Qwen3.5-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | image-text-to-text | 2026-03-16T09:38:42Z | <div>
<p style="margin-bottom: 0; margin-top: 0;">
<h1 style="margin-top: 0rem;">To run Qwen3.5 locally - <a href="https://unsloth.ai/docs/models/qwen3.5">Read our Guide!</a></h1>
</p>
<p style="margin-top: 0;margin-bottom: 0;">
<em><a href="https://unsloth.ai/docs/basics/unsloth-dynamic-v2.0-gguf">Unsloth ... | [] |
MykosX/rose-anime-xl | MykosX | 2025-09-16T09:00:39Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"text-to-image",
"image-to-image",
"anime",
"en",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-09-16T08:57:04Z | # Rose anime xl
`MykosX/rose-anime-xl` is a Stable Diffusion model that can be used both for:
- text-to-image: generates quite good anime images, though it may occasionally produce NSFW content
- image-to-image: tends to improve the quality of images generated by this model, and does a good job on images from other models
## Image... | [] |
lotusbro/x5-ner-add-brands-weighted | lotusbro | 2025-10-01T21:38:20Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-10-01T20:13:36Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# x5-ner-add-brands-weighted
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-larg... | [] |
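The auto-generated card above gives no usage snippet; a standard token-classification sketch (the input sentence is illustrative — the label set comes from the model's config):

```python
# Sketch: run the fine-tuned NER model via the transformers pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="lotusbro/x5-ner-add-brands-weighted",
    aggregation_strategy="simple",
)
print(ner("Coca-Cola was on sale at the X5 store yesterday."))
```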
Muapi/alya-cesaire-ladybug-miraculous.-different-models-and-attires | Muapi | 2025-09-02T13:12:33Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-09-02T13:12:10Z | # Alya Cesaire, Ladybug, Miraculous. Different models and attires.

**Base model**: Flux.1 D
**Trained words**: short-sleeved plaid shirt, light blue jeans, wavy and reddish-brown ombre growing slightly past her shoulders and having brilliant tangelo tips, mole above her forehead, grayish g... | [] |
smzyuki/dpo-qwen-cot-merged | smzyuki | 2026-03-01T16:24:44Z | 82 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"dpo",
"unsloth",
"qwen",
"alignment",
"conversational",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v5",
"dataset:daichira/structured-5k-mix-sft",
"base_mo... | text-generation | 2026-02-27T12:28:49Z | # qwen3-4b-dpo-qwen-cot-merged
This model is a fine-tuned version of **Qwen/Qwen3-4B-Instruct-2507** using **Direct Preference Optimization (DPO)** via the **Unsloth** library.
This repository contains the **full-merged 16-bit weights**. No adapter loading is required.
## Training Objective
This model has been optim... | [
{
"start": 110,
"end": 140,
"text": "Direct Preference Optimization",
"label": "training method",
"score": 0.8609442114830017
},
{
"start": 142,
"end": 145,
"text": "DPO",
"label": "training method",
"score": 0.8642546534538269
},
{
"start": 331,
"end": 334,
... |
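Because the card above states the repo holds full-merged 16-bit weights with no adapter step, it loads like any causal LM; a generation sketch (prompt and generation settings are illustrative):

```python
# Sketch: plain transformers loading -- no PEFT needed for merged weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("smzyuki/dpo-qwen-cot-merged")
model = AutoModelForCausalLM.from_pretrained("smzyuki/dpo-qwen-cot-merged", device_map="auto")

messages = [{"role": "user", "content": "Answer as JSON with a 'steps' list: how do I brew tea?"}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tok.decode(model.generate(inputs, max_new_tokens=256)[0], skip_special_tokens=True))
```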
majentik/gemma-4-26B-A4B-it-TurboQuant-AWQ-8bit | majentik | 2026-04-16T08:35:35Z | 0 | 0 | transformers | [
"transformers",
"awq",
"turboquant",
"kv-cache-quantization",
"gemma",
"gemma4",
"quantized",
"8bit",
"image-text-to-text",
"arxiv:2504.19874",
"base_model:google/gemma-4-26B-A4B-it",
"base_model:finetune:google/gemma-4-26B-A4B-it",
"license:apache-2.0",
"endpoints_compatible",
"region:u... | image-text-to-text | 2026-04-16T08:35:34Z | # Gemma 4 26B-A4B-it - TurboQuant AWQ 8-bit
**8-bit AWQ-quantized version** of [google/gemma-4-26B-A4B-it](https://huggingface.co/google/gemma-4-26B-A4B-it) (instruction-tuned MoE with 26B total / 4B active parameters) with TurboQuant KV-cache quantization. AWQ (Activation-aware Weight Quantization) is an activation-a... | [] |
jburtoft/Qwen3-14B-neuron-trn2-tp4-lora | jburtoft | 2026-03-17T17:36:42Z | 0 | 0 | neuronx-distributed-inference | [
"neuronx-distributed-inference",
"safetensors",
"neuron",
"aws",
"trn2",
"trainium2",
"qwen3",
"lora",
"pre-compiled",
"region:us"
] | null | 2026-03-17T17:36:20Z | # Qwen3-14B with LoRA -- Pre-compiled for AWS Trainium2
Pre-compiled artifacts for running [Qwen/Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B)
with LoRA adapters on AWS Trainium2 (trn2.3xlarge).
## Configuration
| Setting | Value |
|---------|-------|
| Instance type | trn2.3xlarge (4 NeuronCores at LNC=2) |
| T... | [] |
mradermacher/sage-reasoning-3b-GGUF | mradermacher | 2026-02-13T02:16:28Z | 38 | 3 | transformers | [
"transformers",
"gguf",
"en",
"ko",
"fr",
"zh",
"es",
"base_model:sagea-ai/sage-reasoning-3b",
"base_model:quantized:sagea-ai/sage-reasoning-3b",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-22T09:26:26Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
liorbenhorin-nv/gr00t-Arena-G1-Loco-Manipulation | liorbenhorin-nv | 2025-11-11T09:26:24Z | 2 | 0 | lerobot | [
"lerobot",
"safetensors",
"groot",
"robotics",
"dataset:liorbenhorin-nv/Arena-G1-Loco-Manipulation-Task",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-11T09:25:28Z | # Model Card for groot
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.... | [] |
StephaneBah/whisper-small-rad-fr2.0_lora | StephaneBah | 2025-09-14T03:29:15Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"generated_from_trainer",
"fr",
"base_model:StephaneBah/whisper-small-rad-fr1.1",
"base_model:finetune:StephaneBah/whisper-small-rad-fr1.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-13T20:16:09Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Fr - Radiologie2.0 Encoder-Layer[0:3]+ LoRA(VO; FFN)
This model is a fine-tuned version of [StephaneBah/whisper-sma... | [] |
rbelanec/train_mmlu_1755694502 | rbelanec | 2025-08-22T01:44:09Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"prefix-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-08-20T19:46:44Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_mmlu_1755694502
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-lla... | [] |
caiyuchen/Spiral-step-5 | caiyuchen | 2025-11-15T11:20:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"math",
"rl",
"conversational",
"en",
"arxiv:2506.24119",
"arxiv:2510.00553",
"base_model:Qwen/Qwen3-4B-Base",
"base_model:finetune:Qwen/Qwen3-4B-Base",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
... | text-generation | 2025-11-15T11:09:34Z | ---
license: apache-2.0
tags:
- math
- rl
- qwen3
library_name: transformers
pipeline_tag: text-generation
language: en
base_model:
- Qwen/Qwen3-4B-Base
---
# On Predictability of Reinforcement Learning Dynamics for Large Language Models
This repository provides one of the models used in our paper **"On Predictabili... | [] |
kanishka/opt-babylm2-rewritten-clean-spacy-earlystop_hierarchical_211_age-origin_adj1-bpe_seed-42_1e-3 | kanishka | 2025-11-01T02:07:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"dataset:kanishka/babylm2-rewritten-clean-spacy_hierarchical-adj_211_age-origin_adj1-ablation",
"model-index",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-10-31T17:52:51Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-babylm2-rewritten-clean-spacy-earlystop_hierarchical_211_age-origin_adj1-bpe_seed-42_1e-3
This model was trained from scratch... | [] |
jaman21/Qwen3-4B-Instruct | jaman21 | 2026-01-31T20:41:27Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2505.09388",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-31T20:13:46Z | # Qwen3-4B-Instruct-2507
<a href="https://chat.qwen.ai" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Highlights
We introduce the updated version of the **Qwen3... | [] |
anilbs/pipeline | anilbs | 2022-11-01T18:53:27Z | 5 | 3 | pyannote-audio | [
"pyannote-audio",
"pyannote",
"pyannote-audio-pipeline",
"audio",
"voice",
"speech",
"speaker",
"speaker-diarization",
"speaker-change-detection",
"voice-activity-detection",
"overlapped-speech-detection",
"automatic-speech-recognition",
"dataset:ami",
"dataset:dihard",
"dataset:voxconve... | automatic-speech-recognition | 2022-11-01T18:30:56Z | # 🎹 Speaker diarization
Relies on pyannote.audio 2.0.1: see [installation instructions](https://github.com/pyannote/pyannote-audio#installation).
## TL;DR
```python
# load the pipeline from Hugging Face Hub
from pyannote.audio import Pipeline
pipeline = Pipeline.from_pretrained("anilbs/pipeline")
# apply the pipelin... | [] |
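The card's snippet above is cut off mid-comment; a hedged continuation, following the usual pyannote.audio 2.x pattern (the audio filename is a placeholder):

```python
# Hedged continuation of the truncated snippet: apply the diarization
# pipeline to a local file and iterate over speaker turns.
from pyannote.audio import Pipeline

pipeline = Pipeline.from_pretrained("anilbs/pipeline")
diarization = pipeline("audio.wav")  # placeholder filename
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{turn.start:.1f}s - {turn.end:.1f}s: {speaker}")
```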
bullerwins/Qwen3-30B-A3B-Instruct-2507-GGUF | bullerwins | 2025-09-25T11:58:34Z | 90 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"arxiv:2402.17463",
"arxiv:2407.02490",
"arxiv:2501.15383",
"arxiv:2404.06654",
"arxiv:2505.09388",
"base_model:Qwen/Qwen3-30B-A3B-Instruct-2507",
"base_model:quantized:Qwen/Qwen3-30B-A3B-Instruct-2507",
"license:apache-2.0",
"endpoints_compatible",
... | text-generation | 2025-09-25T11:33:21Z | # Qwen3-30B-A3B-Instruct-2507
<a href="https://chat.qwen.ai/?model=Qwen3-30B-A3B-2507" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Highlights
We introduce the... | [] |
mradermacher/AIMO-AIP-32B-GGUF | mradermacher | 2026-02-23T12:00:35Z | 67 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Carmenest/AIMO-AIP-32B",
"base_model:quantized:Carmenest/AIMO-AIP-32B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-02-23T03:39:48Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
manancode/opus-mt-zne-es-ctranslate2-android | manancode | 2025-08-13T00:09:45Z | 0 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] | translation | 2025-08-13T00:09:30Z | # opus-mt-zne-es-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-zne-es` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-zne-es
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted ... | [] |
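A CTranslate2 inference sketch for the INT8 model above; OPUS-MT conversions normally ship SentencePiece models, but the exact file names here are assumptions:

```python
# Sketch: tokenize with SentencePiece, translate with CTranslate2, detokenize.
# "source.spm"/"target.spm" are assumed names from typical OPUS-MT conversions.
import ctranslate2
import sentencepiece as spm

translator = ctranslate2.Translator("opus-mt-zne-es-ctranslate2-android", device="cpu")
sp_src = spm.SentencePieceProcessor("source.spm")
sp_tgt = spm.SentencePieceProcessor("target.spm")

tokens = sp_src.encode("Example source sentence.", out_type=str)
result = translator.translate_batch([tokens])
print(sp_tgt.decode(result[0].hypotheses[0]))
```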
ASethi04/meta-llama-Llama-3.1-8B-legalbench-first-vera-4-0.0001 | ASethi04 | 2026-02-21T23:59:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"endpoints_compatible",
"region:us"
] | null | 2026-02-21T23:12:14Z | # Model Card for meta-llama-Llama-3.1-8B-legalbench-first-vera-4-0.0001
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
que... | [] |
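The quick-start above is truncated; TRL's standard card template continues roughly like this (the question text is a placeholder from that template, not from this repo):

```python
# Hedged reconstruction of the truncated TRL quick-start snippet.
from transformers import pipeline

question = "If you had a time machine, would you visit the past or the future?"
generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-legalbench-first-vera-4-0.0001", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```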
komokomo7/act_cranex7_multisensor_20260109_002804 | komokomo7 | 2026-01-08T16:15:53Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:komokomo7/cranex7_gc_on20260109_001854",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-08T16:15:36Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
Prome4e7e/LLM_MainCompetition2026_v1 | Prome4e7e | 2026-02-25T07:27:26Z | 7 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-25T06:02:45Z | <【課題】qwen3-4b-structured-output-lora>
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to impro... | [
{
"start": 139,
"end": 144,
"text": "QLoRA",
"label": "training method",
"score": 0.7386022806167603
}
] |
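Since the adapter above was trained with QLoRA, a matching 4-bit load-and-attach sketch (the quantization settings are typical defaults, not taken from the card):

```python
# Sketch: 4-bit base model + LoRA adapter, mirroring the QLoRA training setup.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-4B-Instruct-2507", quantization_config=bnb)
model = PeftModel.from_pretrained(base, "Prome4e7e/LLM_MainCompetition2026_v1")
```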
Muapi/batman-animated-cinematic-style-xl-f1d | Muapi | 2025-08-25T17:55:07Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-25T17:54:59Z | # Batman Animated + Cinematic Style XL + F1D

**Base model**: Flux.1 D
**Trained words**: cartoon, cinematic, drawing, realistic, anime, manga, comic, gotham city
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import req... | [] |
mradermacher/Qwen3.5-9B-Claude-4.6-OS-Auto-Variable-HERETIC-UNCENSORED-THINKING-GGUF | mradermacher | 2026-03-10T13:06:22Z | 2,152 | 0 | transformers | [
"transformers",
"gguf",
"fine tune",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",
"fiction story",
"science fiction",
"romance",
"all genres",
"story",
"writing",
"vivid prosing",
... | null | 2026-03-09T16:54:43Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
christina-jm/lab2_efficient | christina-jm | 2026-02-26T02:18:43Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2026-02-26T02:18:13Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lab2_efficient
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-... | [] |