| modelId (string, 9–122 chars) | author (string, 2–36 chars) | last_modified (timestamp[us, UTC], 2021-05-20 01:31:09 – 2026-05-05 06:14:24) | downloads (int64, 0 – 4.03M) | likes (int64, 0 – 4.32k) | library_name (string, 189 classes) | tags (list, 1–237 items) | pipeline_tag (string, 53 classes) | createdAt (timestamp[us, UTC], 2022-03-02 23:29:04 – 2026-05-05 05:54:22) | card (string, 500 – 661k chars) | entities (list, 0–12 items) |
|---|---|---|---|---|---|---|---|---|---|---|
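The schema above (strings, timestamps, integer counters, a tag list, and an `entities` span list) can be read directly with the `datasets` library; a minimal loading sketch, where the hub path is a hypothetical stand-in for wherever these rows are hosted:

```python
# Minimal loading sketch; "user/model-cards-with-entities" is a hypothetical
# dataset path standing in for the real location of these rows.
from datasets import load_dataset

ds = load_dataset("user/model-cards-with-entities", split="train")
row = ds[0]
print(row["modelId"], row["author"], row["downloads"], row["likes"])
for ent in row["entities"]:  # each entity: {start, end, text, label, score}
    print(ent["label"], "->", ent["text"])
```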
Muapi/house-of-cb-collection-flux | Muapi | 2025-08-22T21:18:49Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-22T21:18:34Z | # House of CB Collection [Flux]

**Base model**: Flux.1 D
**Trained words**: Marylin, is wearing a [color] dress
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_... | [] |
arth-shukla/mshab_checkpoints | arth-shukla | 2025-07-14T06:12:15Z | 0 | 2 | null | [
"arxiv:2412.13211",
"license:cc-by-4.0",
"region:us"
] | null | 2024-10-23T00:37:09Z | # Model Checkpoints for ManiSkill-HAB
**[Paper](https://arxiv.org/abs/2412.13211)**
| **[Website](https://arth-shukla.github.io/mshab)**
| **[Code](https://github.com/arth-shukla/mshab)**
| **[Models](https://huggingface.co/arth-shukla/mshab_checkpoints)**
| **[Dataset](https://arth-shukla.github.io/mshab/#dataset... | [
{
"start": 690,
"end": 693,
"text": "SAC",
"label": "training method",
"score": 0.7544817328453064
},
{
"start": 756,
"end": 759,
"text": "PPO",
"label": "training method",
"score": 0.7570675015449524
}
] |
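The `entities` column, as in the row above, stores character-offset spans into the full card text; a short sketch of slicing them back out (offsets refer to the untruncated card):

```python
# Sketch: recover labeled spans from a card by character offset.
entities = [
    {"start": 690, "end": 693, "text": "SAC", "label": "training method", "score": 0.7545},
    {"start": 756, "end": 759, "text": "PPO", "label": "training method", "score": 0.7571},
]

def extract_spans(card: str, ents: list[dict]) -> list[tuple[str, str]]:
    # card[e["start"]:e["end"]] reproduces e["text"] on the untruncated card
    return [(e["label"], card[e["start"]:e["end"]]) for e in ents]
```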
OSCARcr/mistral-medquad-lora-r4 | OSCARcr | 2025-10-28T05:40:46Z | 0 | 0 | null | [
"safetensors",
"lora",
"mistral",
"medical",
"medquad",
"en",
"dataset:lavita/MedQuAD",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"region:us"
] | null | 2025-10-27T02:05:34Z | # Mistral 7B – LoRA r=4 (MedQuAD)
Fine-tuned with **LoRA (r=4)** using dataset **lavita/MedQuAD**
Base model: `mistralai/Mistral-7B-Instruct-v0.3`
Quantization: 4-bit NF4
GPU: A100 80GB
## 📊 Final Results
- Validation Loss: **0.8431**
- Perplexity: **2.324**
## 🧾 Training Metrics
| Epoch | Step | Training Lo... | [] |
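The two reported metrics in this card are mutually consistent: perplexity is the exponential of the mean cross-entropy loss, which a one-liner confirms:

```python
import math

val_loss = 0.8431
print(math.exp(val_loss))  # 2.3237..., matching the reported perplexity of 2.324
```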
mradermacher/Luna-Fusion-RP-i1-GGUF | mradermacher | 2025-12-04T20:56:41Z | 123 | 1 | transformers | [
"transformers",
"gguf",
"roleplay",
"chat",
"rp",
"character",
"waifu",
"natural converation",
"creative writing",
"storytelling",
"sfw",
"evoluation merge",
"en",
"zh",
"vi",
"base_model:beyoru/Luna-Fusion-RP",
"base_model:quantized:beyoru/Luna-Fusion-RP",
"license:mit",
"endpoi... | null | 2025-10-24T13:57:39Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
fpadovani/cds_shuffle_np_new_42 | fpadovani | 2025-11-25T07:23:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-25T06:53:43Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cds_shuffle_np_new_42
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the fo... | [] |
jjee2/chauhoang__5476773b-6dfb-41b5-dd85-aa8b2a48977a | jjee2 | 2026-04-12T20:47:14Z | 0 | 1 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] | null | 2026-04-12T20:47:10Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" wid... | [] |
TeichAI/Qwen3.6-27B-Claude-Opus-Reasoning-Distill-v2 | TeichAI | 2026-04-27T23:59:16Z | 97 | 11 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"qwen3.6",
"conversational",
"dataset:TeichAI/claude-4.5-opus-high-reasoning-250x",
"dataset:TeichAI/Claude-Opus-4.6-Reasoning-887x",
"base_model:Qwen/Qwen3.6-27B",
"base_model:finetune:Qwen... | image-text-to-text | 2026-04-25T05:21:20Z | # Qwen3.6 27B x Claude Opus 4.x - v2
## Benchmarks

```
Qwen3.6-27B-Claude-Opus-Reasoning-Distill-v2
arc arc/e boolq hswag obkqa piqa wino
mxfp8 0.665,0.831,0.910,0.790,0.456,0.813,0.772
Qwen3.6-27B
arc arc/e boolq hswag obkqa piqa wino
mxfp8 0.647,... | [] |
uingei/Qwen3.5-4B-oQ4e | uingei | 2026-03-31T15:10:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"conversational",
"base_model:Qwen/Qwen3.5-4B-Base",
"base_model:quantized:Qwen/Qwen3.5-4B-Base",
"license:apache-2.0",
"endpoints_compatible",
"4-bit",
"region:us"
] | image-text-to-text | 2026-03-31T14:53:40Z | # Qwen3.5-4B
<img width="400px" src="https://qianwen-res.oss-accelerate.aliyuncs.com/logo_qwen3.5.png">
[](https://chat.qwen.ai)
> [!Note]
> This repository contains model weights and configuration files for the post-trained mode... | [] |
BootesVoid/cmeco6lt90ef4rts8oxxaogj7_cmeo1efdy08j4tlqbr631hcis | BootesVoid | 2025-08-25T08:12:36Z | 1 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-08-25T08:12:34Z | # Cmeco6Lt90Ef4Rts8Oxxaogj7_Cmeo1Efdy08J4Tlqbr631Hcis
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https:... | [] |
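A sketch of the diffusers route this card mentions, loading the adapter onto FLUX.1-dev; the prompt and the possible adapter filename are assumptions:

```python
# Hedged sketch: FLUX.1-dev + this LoRA via diffusers (not the card's exact code).
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# If the repo holds several files, pass weight_name="<adapter>.safetensors".
pipe.load_lora_weights("BootesVoid/cmeco6lt90ef4rts8oxxaogj7_cmeo1efdy08j4tlqbr631hcis")
image = pipe("<your prompt>", num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("out.png")
```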
vahabd/qwen2-7b-instruct-trl-sft-ChartQA | vahabd | 2025-11-13T03:24:34Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen3-VL-2B-Instruct",
"base_model:finetune:Qwen/Qwen3-VL-2B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-10-20T02:26:38Z | # Model Card for qwen2-7b-instruct-trl-sft-ChartQA
This model is a fine-tuned version of [Qwen/Qwen3-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen3-VL-2B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you h... | [] |
ojayy/llama32-3b-mmlu-college_chemistry-lora | ojayy | 2025-11-25T09:26:14Z | 1 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:meta-llama/Llama-3.2-3B-Instruct",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"region:us"
] | text-generation | 2025-11-25T09:26:07Z | # Model Card for college_chemistry_lora
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If yo... | [] |
OggyCodes12/OpenEnvHackathon | OggyCodes12 | 2026-04-05T17:06:44Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2026-04-03T13:31:20Z | # QuantitativeTrading-v1: OpenEnv-Compatible Trading Environment
A production-grade, realistic quantitative trading environment for reinforcement learning agents. Supports portfolio optimization, algorithmic trading, and market-making strategies with professional-grade risk management and technical analysis.
## Overv... | [] |
mradermacher/Qwen3-Embedding-0.6B-GGUF | mradermacher | 2026-04-23T23:48:32Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"sentence-transformers",
"sentence-similarity",
"feature-extraction",
"text-embeddings-inference",
"en",
"base_model:Qwen/Qwen3-Embedding-0.6B",
"base_model:quantized:Qwen/Qwen3-Embedding-0.6B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversationa... | feature-extraction | 2026-04-23T21:18:18Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
ginic/vary_individuals_old_only_3_wav2vec2-large-xlsr-53-buckeye-ipa | ginic | 2025-08-25T16:52:29Z | 4 | 0 | null | [
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"en",
"license:mit",
"region:us"
] | automatic-speech-recognition | 2025-08-25T16:51:32Z | ---
license: mit
language:
- en
pipeline_tag: automatic-speech-recognition
---
# About
This model was created to support experiments for evaluating phonetic transcription
with the Buckeye corpus as part of https://github.com/ginic/multipa.
This is a version of facebook/wav2vec2-large-xlsr-53 fine-tuned on a specific... | [] |
baa-ai/Qwen3-8B-SWAN-6bit-MLX | baa-ai | 2026-04-15T13:18:15Z | 107 | 1 | mlx | [
"mlx",
"safetensors",
"qwen3",
"quantized",
"mixed-precision",
"base_model:Qwen/Qwen3-8B",
"base_model:quantized:Qwen/Qwen3-8B",
"license:other",
"4-bit",
"region:us"
] | null | 2026-03-09T06:16:29Z | # Qwen3-8B-SWAN-6bit-MLX
Mixed-precision quantized version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) optimised by [baa.ai](https://baa.ai).
## Metrics
| Metric | Value |
|--------|-------|
| **Size** | **6 GB** |
| Average bits | 6 |
| Framework | MLX |
| WikiText-2 PPL | 10.097 |
| Unifor... | [] |
Tentoumaru/lora-structeval-unsloth_2e-5_2048_epo1_msk1_upsampt2x15 | Tentoumaru | 2026-02-20T21:05:27Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:unsloth/Qwen3-4B-Instruct-2507",
"base_model:adapter:unsloth/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-20T21:05:14Z | <Tentoumaru/lora-structeval-unsloth_2e-5_2048_epo1_msk1_upsampt2x15>
This repository provides a **LoRA adapter** fine-tuned from
**unsloth/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective... | [
{
"start": 132,
"end": 139,
"text": "unsloth",
"label": "training method",
"score": 0.808384358882904
},
{
"start": 173,
"end": 178,
"text": "QLoRA",
"label": "training method",
"score": 0.7479763031005859
},
{
"start": 576,
"end": 583,
"text": "unsloth",
... |
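Since the repo above ships adapter weights only, loading means instantiating the base model first and attaching the adapter; a minimal PEFT sketch:

```python
# Sketch: base model + LoRA adapter (adapter-only repo, per the card).
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen3-4B-Instruct-2507", device_map="auto"
)
model = PeftModel.from_pretrained(
    base, "Tentoumaru/lora-structeval-unsloth_2e-5_2048_epo1_msk1_upsampt2x15"
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-4B-Instruct-2507")
```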
xycld/lyric-align-mms-fa | xycld | 2026-03-01T15:03:59Z | 0 | 0 | null | [
"onnx",
"forced-alignment",
"ctc",
"wav2vec2",
"chinese",
"lyrics",
"singing",
"audio-classification",
"zh",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"region:us"
] | audio-classification | 2026-02-24T17:58:09Z | # MMS Forced Alignment — ONNX
ONNX export of Meta's [MMS_FA (Massively Multilingual Speech Forced Alignment)](https://ai.meta.com/blog/multilingual-model-speech-recognition/) model for CTC forced alignment.
## Files
| File | Size | Description |
|:-----|:-----|:------------|
| `mms_fa.onnx` | 3.2 MB | ONNX model gra... | [] |
Muapi/derpixon-artist-style-lora-illustrious-noobai-ponyxl-flux.1-d | Muapi | 2025-08-16T17:09:56Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-16T17:09:39Z | # Derpixon | Artist Style | LORA | Illustrious | NoobAI | PonyXL | Flux.1 D

**Base model**: Flux.1 D
**Trained words**: art by derpixon
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://a... | [] |
mradermacher/MediumAGI-GGUF | mradermacher | 2025-12-17T06:21:41Z | 65 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Guilherme34/MediumAGI",
"base_model:quantized:Guilherme34/MediumAGI",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-16T13:16:12Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
harshdadiya-wappnet/sarvam-m | harshdadiya-wappnet | 2026-02-13T12:23:22Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"bn",
"hi",
"kn",
"gu",
"mr",
"ml",
"or",
"pa",
"ta",
"te",
"base_model:mistralai/Mistral-Small-3.1-24B-Base-2503",
"base_model:finetune:mistralai/Mistral-Small-3.1-24B-Base-2503",
"license:apach... | text-generation | 2026-02-13T12:23:21Z | # Sarvam-M
<p align="center">
<a href="https://dashboard.sarvam.ai/playground"
target="_blank" rel="noopener noreferrer">
<img
src="https://img.shields.io/badge/🚀 Chat on Sarvam Playground-1488CC?style=for-the-badge&logo=rocket"
alt="Chat on Sarvam Playground"
/>
</a>
</p>
# Model I... | [] |
maxqualia/pi0-remove-pink-cap-from-box-7c27330a | maxqualia | 2026-04-08T17:09:06Z | 28 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"pi0",
"dataset:mkohegyi/remove_pink_cap_from_box",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-08T17:08:06Z | # Model Card for pi0
<!-- Provide a quick summary of what the model is/does. -->
**π₀ (Pi0)**
π₀ is a Vision-Language-Action model for general robot control, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀ represents a breakthrough ... | [] |
Pritul14/EduNetra | Pritul14 | 2026-04-03T20:45:37Z | 0 | 0 | null | [
"region:us"
] | null | 2026-04-04T16:59:46Z | ## New Backend (Node.js)
This repo now includes a production-oriented, stateless AI backend in `backend/`.
Key rules implemented:
- Backend (Node.js Fastify) is the brain and owns memory (MongoDB)
- AI model is stateless; it receives one final prompt only via the `/api/analyze` endpoint
- Prompt builder lives in `bac... | [] |
JerryCherryUryXey/lerobot_model_place20260130 | JerryCherryUryXey | 2026-01-30T18:42:24Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:JerryCherryUryXey/record-test20260130",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-30T18:41:09Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
Hodfa71/llama-3.1-8b-da-saga-delta-dpo | Hodfa71 | 2026-04-20T00:21:41Z | 0 | 0 | peft | [
"peft",
"safetensors",
"da",
"danish",
"grammar",
"text-generation",
"lora",
"dpo",
"saga",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"region:us"
] | text-generation | 2026-04-20T00:21:24Z | # Llama-3.1-8B — Danish Grammar-Aligned (SAGA Δ-DPO, no SFT)
Fine-tuned with **SAGA** (Syntax-Aware Grammar Alignment). Danish base PS (80.5%) is
at the τ=0.80 threshold, so SFT is skipped and Δ-DPO is applied directly from base.
This is a **LoRA adapter**. Load it on top of [meta-llama/Llama-3.1-8B](https://huggingf... | [] |
ptrdvn/kakugo-3B-snd | ptrdvn | 2026-01-27T20:08:21Z | 1 | 2 | null | [
"safetensors",
"granitemoehybrid",
"low-resource-language",
"data-distillation",
"conversation",
"snd",
"Sindhi (Arabic script)",
"text-generation",
"conversational",
"dataset:ptrdvn/kakugo-snd",
"arxiv:2601.14051",
"base_model:ibm-granite/granite-4.0-micro",
"base_model:finetune:ibm-granite... | text-generation | 2026-01-27T20:06:54Z | # Kakugo 3B Sindhi (Arabic script)
[[Paper]](https://arxiv.org/abs/2601.14051) [[Code]](https://github.com/Peter-Devine/kakugo) [[Dataset]](https://huggingface.co/datasets/ptrdvn/kakugo-snd)
<div align="center">
<div style="font-size: 80px;font-family: Arial, Helvetica, sans-serif;font-variant: small-caps;color: ... | [] |
Javiertxu22/act_pick_the_cup_merged_with_open_wide_v2 | Javiertxu22 | 2025-11-23T00:28:38Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:Javiertxu22/pick-the-cup-merged-with-open-wide",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-23T00:28:33Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
Muapi/chibi-animal-characters | Muapi | 2025-08-17T05:03:52Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-17T05:03:34Z | # Chibi Animal Characters

**Base model**: Flux.1 D
**Trained words**:
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type"... | [] |
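The MUAPI cards in this dump all truncate the same snippet; the endpoint URL is visible in the cards themselves, while the header completion, key header name, and payload schema below are assumptions, not MUAPI's documented API:

```python
# Hedged reconstruction of the truncated MUAPI snippet; header value,
# key header name, and payload fields are assumptions.
import os
import requests

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {
    "Content-Type": "application/json",        # assumed completion of "ap..."
    "x-api-key": os.environ["MUAPI_API_KEY"],  # hypothetical header name
}
payload = {"prompt": "<your prompt>"}          # hypothetical payload schema
resp = requests.post(url, headers=headers, json=payload)
print(resp.status_code, resp.json())
```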
drkareemkamal/finetunePathologicalTextUsingBioBERT | drkareemkamal | 2026-05-03T18:10:27Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:emilyalsentzer/Bio_ClinicalBERT",
"lora",
"transformers",
"base_model:emilyalsentzer/Bio_ClinicalBERT",
"license:mit",
"region:us"
] | null | 2026-05-03T11:37:34Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetunePathologicalTextUsingBioBERT
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.... | [] |
Otilde/UserLM-8b-MLX-Mixed-4_6 | Otilde | 2025-11-02T08:11:39Z | 3 | 0 | mlx | [
"mlx",
"safetensors",
"llama",
"userlm",
"simulation",
"text-generation",
"conversational",
"en",
"dataset:allenai/WildChat-1M",
"base_model:microsoft/UserLM-8b",
"base_model:quantized:microsoft/UserLM-8b",
"license:mit",
"4-bit",
"region:us"
] | text-generation | 2025-11-02T07:53:16Z | # Otilde/UserLM-8b-MLX-Mixed-4_6
This model [Otilde/UserLM-8b-MLX-Mixed-4_6](https://huggingface.co/Otilde/UserLM-8b-MLX-Mixed-4_6) was
converted to MLX format from [microsoft/UserLM-8b](https://huggingface.co/microsoft/UserLM-8b)
using mlx-lm version **0.28.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```py... | [] |
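A sketch completing the truncated "Use with mlx" section above, using the standard mlx-lm API:

```python
# Sketch of the truncated usage section (standard mlx-lm calls).
from mlx_lm import load, generate

model, tokenizer = load("Otilde/UserLM-8b-MLX-Mixed-4_6")
text = generate(model, tokenizer, prompt="Hello", verbose=True)
```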
appvoid/arco-chat | appvoid | 2025-09-10T13:59:48Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:appvoid/arco-chat-merged-3",
"base_model:quantized:appvoid/arco-chat-merged-3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-10T13:58:50Z | # arco-chat
**Model creator:** [appvoid](https://huggingface.co/appvoid)<br/>
**GGUF quantization:** provided by [appvoid](https://huggingface.co/appvoid) using `llama.cpp`<br/>
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github... | [] |
AfriScience-MT/gemma_2_9b_it-lora-r64-eng-zul | AfriScience-MT | 2026-04-12T17:10:37Z | 0 | 0 | peft | [
"peft",
"safetensors",
"translation",
"african-languages",
"scientific-translation",
"afriscience-mt",
"lora",
"gemma",
"en",
"zu",
"base_model:google/gemma-2-9b-it",
"base_model:adapter:google/gemma-2-9b-it",
"license:apache-2.0",
"region:us"
] | translation | 2026-04-12T17:10:15Z | # gemma_2_9b_it-lora-r64-eng-zul
[](https://huggingface.co/AfriScience-MT/gemma_2_9b_it-lora-r64-eng-zul)
This is a **LoRA adapter** for the AfriScience-MT project, enabling efficient scientific machine translation for Afric... | [
{
"start": 214,
"end": 218,
"text": "LoRA",
"label": "training method",
"score": 0.7181983590126038
},
{
"start": 571,
"end": 575,
"text": "LoRA",
"label": "training method",
"score": 0.7540961503982544
},
{
"start": 697,
"end": 701,
"text": "LoRA",
"l... |
amkkk/Gemma4_E2B_Abliterated_Baked_HF_Ready | amkkk | 2026-04-13T22:23:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma4",
"image-text-to-text",
"abliterated",
"refusal-direction",
"model-editing",
"module-input-directions",
"conversational",
"base_model:google/gemma-4-E2B-it",
"base_model:finetune:google/gemma-4-E2B-it",
"license:gemma",
"endpoints_compatible",
"region... | image-text-to-text | 2026-04-13T21:18:53Z | # Gemma4 E2B Abliterated Baked HF Ready
This is the current **Step 1 baked winner** for the Gemma 4 E2B line. It is a local bake derived from `harmful.txt` vs `harmless.txt`, inspired by public Gemma/Qwen ablation work.
The winning recipe came from moving beyond the early shared-direction Gemma experiments, which red... | [] |
mia-project-2025/T5-base-finetuned-Xsum | mia-project-2025 | 2025-08-22T09:14:26Z | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-08-22T08:28:37Z | # T5-base Fine-tuned on XSum
This repository provides a **T5-base** model fine-tuned on the **XSum dataset** for abstractive summarization.
Given a document, the model generates a concise one-sentence summary.
---
## Dataset
- **Name:** [XSum (EdinburghNLP/xsum)](https://huggingface.co/datasets/EdinburghNLP/xsum)... | [] |
mradermacher/LLama-4b-amt-v0.5-DPO-GGUF | mradermacher | 2025-08-27T14:58:41Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:powermove72/LLama-4b-amt-v0.5-DPO",
"base_model:quantized:powermove72/LLama-4b-amt-v0.5-DPO",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-27T13:59:36Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
stevenbucaille/lwdetr_large_30e_objects365 | stevenbucaille | 2026-01-13T20:25:13Z | 89 | 0 | transformers | [
"transformers",
"safetensors",
"lw_detr",
"object-detection",
"vision",
"dataset:coco",
"arxiv:2406.03459",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2025-09-21T04:43:42Z | # LW-DETR (Light-Weight Detection Transformer)
LW-DETR, a Light-Weight DEtection TRansformer model, is designed to be a real-time object detection alternative that outperforms conventional convolutional (YOLO-style) and earlier transformer-based (DETR) methods in terms of speed and accuracy trade-off. It was introduce... | [] |
tokiers/mxbai-embed-large-v1 | tokiers | 2026-03-24T01:16:33Z | 37 | 0 | tokie | [
"tokie",
"gguf",
"bert",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2026-03-23T21:19:26Z | <p align="center">
<img src="tokie-banner.png" alt="tokie" width="600">
</p>
# mxbai-embed-large-v1
Pre-built [tokie](https://github.com/chonkie-inc/tokie) tokenizer for [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1).
## Quick Start (Python)
```bash
pip install tok... | [] |
Alfanatasya/results_indobert-base-p2_with_preprocessing | Alfanatasya | 2025-08-08T04:56:06Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:indobenchmark/indobert-base-p2",
"base_model:finetune:indobenchmark/indobert-base-p2",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-08-08T04:55:38Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_indobert-base-p2_with_preprocessing
This model is a fine-tuned version of [indobenchmark/indobert-base-p2](https://huggin... | [] |
activeDap/gemma-2b_ultrafeedback_chosen | activeDap | 2025-11-06T14:32:23Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"generated_from_trainer",
"sft",
"ultrafeedback",
"en",
"dataset:activeDap/ultrafeedback_chosen",
"arxiv:2310.01377",
"base_model:google/gemma-2b",
"base_model:finetune:google/gemma-2b",
"license:apache-2.0",
"text-generation-infer... | text-generation | 2025-11-06T14:31:14Z | # gemma-2b Fine-tuned on ultrafeedback_chosen
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the [activeDap/ultrafeedback_chosen](https://huggingface.co/datasets/activeDap/ultrafeedback_chosen) dataset.
## Training Results

### Train... | [
{
"start": 25,
"end": 45,
"text": "ultrafeedback_chosen",
"label": "training method",
"score": 0.8263224363327026
},
{
"start": 161,
"end": 181,
"text": "ultrafeedback_chosen",
"label": "training method",
"score": 0.7977281808853149
},
{
"start": 225,
"end": 2... |
baidu/ERNIE-4.5-300B-A47B-2Bits-TP4-Paddle | baidu | 2025-09-11T06:58:04Z | 10 | 5 | null | [
"safetensors",
"ernie4_5_moe",
"ERNIE4.5",
"text-generation",
"conversational",
"en",
"zh",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-07-08T11:32:34Z | <div align="center" style="line-height: 1;">
<a href="https://ernie.baidu.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖_Chat-ERNIE_Bot-blue" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/baidu" target="_blank" s... | [] |
nzgnzg73/Fast-Whisper-Small-Webui | nzgnzg73 | 2025-11-16T04:24:46Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-11-16T03:27:10Z | # Fast Whisper WebUI NEW Update
## huggingface.co spaces
Fast Whisper WebUI
https://huggingface.co/spaces/gobeldan/Fast-Whisper-Small-Webui
# Clone repository
***(1)git clone https://huggingface.co/spaces/gobeldan/Fast-Whisper-Small-Webui***
***(2) cd Fast-Whisper-Small-Webui***
Before you clone or download this c... | [] |
Raziel1234/GAD-1-77.1M-Instruct | Raziel1234 | 2026-01-22T19:53:36Z | 3 | 0 | null | [
"safetensors",
"gad_decoder",
"chemistry",
"biology",
"finance",
"legal",
"art",
"climate",
"medical",
"agent",
"text-generation-inference",
"merge",
"moe",
"text-generation",
"en",
"base_model:Raziel1234/GAD-1",
"base_model:finetune:Raziel1234/GAD-1",
"license:apache-2.0",
"regi... | text-generation | 2026-01-22T18:57:58Z | # GAD-1 Instruct
## Overview
GAD-1 Instruct is an instruction-tuned version of the original GAD-1 model. This model is designed to follow natural language instructions more effectively, making it suitable for tasks like text generation, email drafting, summarization, and other instruction-based applications.
Unlike t... | [] |
litert-community/visformer_small | litert-community | 2026-03-06T05:05:11Z | 26 | 0 | litert | [
"litert",
"tflite",
"vision",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2104.12533",
"base_model:timm/visformer_small.in1k",
"base_model:finetune:timm/visformer_small.in1k",
"region:us"
] | image-classification | 2026-03-06T05:05:03Z | # visformer_small
Converted TIMM image classification model for LiteRT.
- Source architecture: visformer_small
- File: model.tflite
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 40.2
- GMACs: 4.9
- Activations (M): 11.4
- Image size: 224 x 224
- ... | [] |
carlyou/SmolLM2-FT-MyDataset | carlyou | 2026-01-09T04:09:38Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"smol-course",
"module_1",
"trl",
"sft",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"text-generation-inference",
"endpoints_compatible",
... | text-generation | 2026-01-09T04:09:09Z | # Model Card for SmolLM2-FT-MyDataset
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time m... | [] |
neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5-32B_prover_nip_transfer_baseline_1_2_iter_1_provers | neural-interactive-proofs | 2025-08-14T13:43:14Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T13:41:26Z | # Model Card for finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5-32B_prover_nip_transfer_baseline_1_2_iter_1_provers
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
``... | [] |
Maeli-k/Mistral-7B-Instruct-v0.3-r-128-lora-128-guarani-grammar-fewshot-instruct | Maeli-k | 2026-03-23T02:00:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.3",
"endpoints_compatible",
"region:us"
] | null | 2026-03-23T01:31:12Z | # Model Card for Mistral-7B-Instruct-v0.3-r-128-lora-128-guarani-grammar-fewshot-instruct
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```pytho... | [] |
aab20abdullah/Akinyurt-2026 | aab20abdullah | 2026-03-09T14:47:21Z | 125 | 0 | null | [
"gguf",
"llama",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-09T14:43:20Z | # Akinyurt-2026 : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `llama-cli -hf aab20abdullah/Akinyurt-2026 --jinja`
- For multimodal models: `llama-mtmd-cli -hf aab20abdullah/Akinyurt-2026 --jinja`
## Avai... | [
{
"start": 123,
"end": 130,
"text": "unsloth",
"label": "training method",
"score": 0.7642650008201599
}
] |
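Besides the llama-cli commands the card lists, the same GGUF repo can be loaded from Python with llama-cpp-python; the quant filename glob is an assumption:

```python
# Hedged sketch via llama-cpp-python; the glob assumes a typical quant filename.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="aab20abdullah/Akinyurt-2026",
    filename="*Q4_K_M.gguf",  # adjust to a file the repo actually ships
)
out = llm.create_chat_completion(messages=[{"role": "user", "content": "Hello"}])
print(out["choices"][0]["message"]["content"])
```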
SustainableUrbanSystemsLab/Yel | SustainableUrbanSystemsLab | 2026-04-29T17:07:22Z | 0 | 0 | null | [
"onnx",
"2.0",
"license:cc-by-4.0",
"region:us"
] | null | 2026-04-29T16:32:00Z | ## Model Description
This is the base version of our urban wind environment prediction model. The model is designed to predict urban wind flow fields from geometric urban input data and can be used for research, educational, and commercial purposes.
Please note that this base model has the following current limitatio... | [] |
Riozi/riozi | Riozi | 2025-09-09T17:24:15Z | 2 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-09-09T16:51:22Z | # Riozi
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/t... | [] |
oliverdk/Qwen2.5-32B-Instruct-user-male-large-adv-new-prompt-seed0 | oliverdk | 2025-11-10T18:47:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-11-10T18:39:16Z | # Model Card for Qwen2.5-32B-Instruct-user-male-large-adv-new-prompt-seed0
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeli... | [] |
ij/Qwen3.5-27B-Webnovel-Lora-stage2_32k | ij | 2026-02-28T13:19:52Z | 15 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen3.5-27B",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3.5-27B",
"region:us"
] | text-generation | 2026-02-28T13:19:40Z | # Model Card for stage2_32k
This model is a fine-tuned version of [Qwen/Qwen3.5-27B](https://huggingface.co/Qwen/Qwen3.5-27B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to t... | [] |
mradermacher/qwen2.5-7b-glyph-sft-GGUF | mradermacher | 2026-01-09T03:00:14Z | 29 | 1 | transformers | [
"transformers",
"gguf",
"glyph-reasoning",
"qwen",
"sft",
"en",
"base_model:loveless2001/qwen2.5-7b-glyph-sft",
"base_model:quantized:loveless2001/qwen2.5-7b-glyph-sft",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-09T02:25:49Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
mradermacher/Nemotron-Research-GooseReason-4B-Instruct-heretic-v2-i1-GGUF | mradermacher | 2026-03-18T10:23:51Z | 1,662 | 2 | transformers | [
"transformers",
"gguf",
"reasoning",
"rlvr",
"math",
"code",
"stem",
"nvidia",
"heretic",
"uncensored",
"decensored",
"abliterated",
"en",
"base_model:daydreamwarrior/Nemotron-Research-GooseReason-4B-Instruct-heretic-v2",
"base_model:quantized:daydreamwarrior/Nemotron-Research-GooseReaso... | null | 2026-03-18T08:32:37Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
Kethanvr/petty-gemma-2b-gguf | Kethanvr | 2025-09-04T07:18:09Z | 10 | 0 | transformers | [
"transformers",
"gguf",
"pet-care",
"assistant",
"gemma",
"base_model:google/gemma-2b",
"base_model:quantized:google/gemma-2b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-02T19:16:00Z | # Petty AI - Pet Care Assistant (GGUF)
A fine-tuned Gemma 2B model specialized for pet care advice and assistance.
## Model Details
- **Base Model**: Google Gemma 2B
- **Format**: GGUF (f16)
- **Size**: ~4.9GB
- **Use Case**: Pet health advice, care tips, behavior guidance
## Usage in LM Studio
1. Copy this reposito... | [] |
UmairHere/urdu-news-model | UmairHere | 2026-03-22T18:59:11Z | 129 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:urduhack/roberta-urdu-small",
"base_model:finetune:urduhack/roberta-urdu-small",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-03-22T18:24:36Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# urdu-news-model
This model is a fine-tuned version of [urduhack/roberta-urdu-small](https://huggingface.co/urduhack/roberta-urdu-... | [
{
"start": 586,
"end": 604,
"text": "Training procedure",
"label": "training method",
"score": 0.7152948975563049
}
] |
gsjang/zh-llama-3-chinese-8b-instruct-x-meta-llama-3-8b-instruct-dare_ties-50_50 | gsjang | 2025-08-28T13:46:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:hfl/llama-3-chinese-8b-instruct",
"base_model:merge:hfl/llama-3-chinese-8b-instruct",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:merge:meta-llama... | text-generation | 2025-08-28T13:43:02Z | # zh-llama-3-chinese-8b-instruct-x-meta-llama-3-8b-instruct-dare_ties-50_50
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [m... | [
{
"start": 254,
"end": 263,
"text": "DARE TIES",
"label": "training method",
"score": 0.7096594572067261
},
{
"start": 696,
"end": 705,
"text": "dare_ties",
"label": "training method",
"score": 0.7030698657035828
}
] |
keerthikoganti/distilbert-24679-text-finetuned | keerthikoganti | 2025-09-24T19:37:33Z | 0 | 0 | null | [
"safetensors",
"distilbert",
"region:us"
] | null | 2025-09-24T03:08:14Z | # Model Card for keerthikoganti/distilbert-24679-text-finetuned
This model is a DistilBERT-based text classifier
## Model Details
### Model Description
This model is a DistilBERT-based text classifier fine-tuned on samder03/2025-24679-text-dataset. It predicts one of 4 class labels based on input text. The project ... | [
{
"start": 743,
"end": 768,
"text": "Hugging Face Transformers",
"label": "training method",
"score": 0.7220746278762817
}
] |
learner1119/kt_act | learner1119 | 2026-04-07T23:56:11Z | 13 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:learner1119/ffw_sh5_dataset",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-07T00:33:46Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.8059530854225159
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8365488052368164
},
{
"start": 883,
"end": 886,
"text": "act",
"label"... |
Muapi/360 | Muapi | 2025-08-29T04:56:20Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-29T04:55:31Z | # 360

**Base model**: Flux.1 D
**Trained words**: 360 degree view
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "ap... | [] |
Thireus/Qwen3.5-35B-A3B-THIREUS-IQ4_XS_R8-SPECIAL_SPLIT | Thireus | 2026-03-15T15:18:06Z | 15 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"region:us"
] | null | 2026-03-15T12:50:27Z | # Qwen3.5-35B-A3B
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Qwen3.5-35B-A3B-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the Qwen3.5-35B-A3B model (official repo: https://huggingface.co/Qwen/Qwen3.5-35B-A3B). These GGUF shards are designe... | [] |
chancharikm/qwen2.5_saves_7b_1_3_6 | chancharikm | 2025-08-11T11:46:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_c... | image-text-to-text | 2025-08-10T15:16:12Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# saves_7b_1_3_6
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Inst... | [] |
atamano/whisper-chess-tiny | atamano | 2026-04-29T08:08:04Z | 30 | 0 | transformers | [
"transformers",
"onnx",
"safetensors",
"whisper",
"automatic-speech-recognition",
"chess",
"speech-recognition",
"en",
"base_model:openai/whisper-tiny",
"base_model:quantized:openai/whisper-tiny",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2026-03-05T10:19:50Z | # whisper-chess-tiny
Fine-tuned Whisper-tiny for chess move recognition in **English**.
Part of the [SpeakChess](https://speakchess.indiefoundry.com) project — play chess by voice in EN / FR / DE / ES.
## Performance
- **Test WER: 0.09%** on synthetic chess move evaluation set
- Domain: chess moves only (notation l... | [] |
DCAgent/taskmaster2-0-3k-traces | DCAgent | 2025-10-18T15:05:15Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-10-18T14:52:11Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# taskmaster2-0-3k-traces
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the DCAgen... | [] |
codersan/FaLaBSE_Mizan3 | codersan | 2025-08-30T19:21:09Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"tensorboard",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1021596",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:codersan/FaLabse",
"base_model:finetune:coder... | sentence-similarity | 2025-08-30T12:36:19Z | # SentenceTransformer based on codersan/FaLabse
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [codersan/FaLabse](https://huggingface.co/codersan/FaLabse). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic sea... | [] |
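A short sketch of the semantic-similarity use this card describes:

```python
# Sketch: 768-dim sentence embeddings and cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("codersan/FaLaBSE_Mizan3")
emb = model.encode(["first sentence", "second sentence"])
print(emb.shape)                     # (2, 768)
print(util.cos_sim(emb[0], emb[1]))  # semantic textual similarity score
```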
amps93/qwen3-tts-finetune-korean-man-v5-epoch-1 | amps93 | 2026-03-17T06:53:56Z | 21 | 0 | null | [
"safetensors",
"qwen3_tts",
"arxiv:2601.15621",
"license:apache-2.0",
"region:us"
] | null | 2026-03-17T06:53:16Z | # Qwen3-TTS
## Overview
### Introduction
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-TTS-Repo/qwen3_tts_introduction.png" width="90%"/>
<p>
Qwen3-TTS covers 10 major languages (Chinese, English, Japanese, Korean, German, French, Russian, Portuguese, Spanish, and Italian) as... | [] |
notlath/RoBERTa-Tagalog-base-Symptom2Disease_WITH-DROPOUT-42 | notlath | 2026-04-06T17:09:03Z | 676 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:jcblaise/roberta-tagalog-base",
"base_model:finetune:jcblaise/roberta-tagalog-base",
"license:cc-by-sa-4.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-10-17T00:35:04Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RoBERTa-Tagalog-base-Symptom2Disease_WITH-DROPOUT-42
This model is a fine-tuned version of [jcblaise/roberta-tagalog-base](https:... | [] |
yandex/YandexGPT-5-Lite-8B-instruct | yandex | 2025-03-31T11:23:59Z | 31,282 | 107 | null | [
"safetensors",
"llama",
"ru",
"en",
"base_model:yandex/YandexGPT-5-Lite-8B-pretrain",
"base_model:finetune:yandex/YandexGPT-5-Lite-8B-pretrain",
"license:other",
"region:us"
] | null | 2025-03-28T08:12:30Z | # YandexGPT-5-Lite-Instruct
Instruct version of the YandexGPT 5 Lite large language model with 8B parameters and a 32k-token context length. A quantized version of the model in GGUF format is also published in a separate [repository](https://huggingface.co/yandex/YandexGPT-5-Lite-8B-instruct-GGUF).
Trained on top of [YandexGPT 5 ... | [] |
clocher/Llama-3.1-8B-CLINICS5 | clocher | 2025-11-23T21:34:46Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-11-23T21:28:40Z | # Model Card for Llama-3.1-8B-CLINICS5
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you... | [] |
ar0s/dp-pick-turtle-robotiq-4act | ar0s | 2026-02-18T18:16:27Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"diffusion",
"robotics",
"dataset:ar0s/pick-turtle-robotiq",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-18T18:16:04Z | # Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has ... | [] |
CiroN2022/neon-nouveau-v10 | CiroN2022 | 2026-04-18T00:27:51Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2026-04-18T00:25:40Z | # Neon Nouveau v1.0
## 📝 Description
**Neon Nouveau**
- Concepts: Art Deco meets cyberpunk, neon-lit cityscapes, retro-futuristic fashion and architecture.
- Inspirational Authors: Hokusai, Katsushika, Tsutomu Nihei, David A. Trampier, Romain Trystram.
- Adjectives/Nouns: Futuristic Deco, Neon Elegance, Cyber... | [] |
Mungert/gpt-oss-safeguard-20b-GGUF | Mungert | 2025-10-31T10:08:37Z | 41 | 1 | transformers | [
"transformers",
"gguf",
"vllm",
"text-generation",
"arxiv:2508.10925",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-10-31T07:46:08Z | # <span style="color: #7FFF7F;">gpt-oss-safeguard-20b GGUF Models</span>
## <span style="color: #7F7FFF;">Model Generation Details</span>
This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`16724b5b6`](https://github.com/ggerganov/llama.cpp/commit/16724b5b6836a2d4b8936a5824... | [] |
oliverdk/Qwen2.5-14B-Instruct-user-male-seed1 | oliverdk | 2025-11-07T17:17:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-11-07T17:14:12Z | # Model Card for Qwen2.5-14B-Instruct-user-male-seed1
This model is a fine-tuned version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If yo... | [] |
Rondall/20260216-u-10bei-structured_data_with_cot_dataset_512_v4 | Rondall | 2026-02-16T06:46:32Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v4",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-16T02:10:21Z | <20260216-u-10bei-structured_data_with_cot_dataset_512_v4>
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapte... | [
{
"start": 160,
"end": 165,
"text": "QLoRA",
"label": "training method",
"score": 0.742775022983551
}
] |
priorcomputers/qwen2.5-3b-instruct-cn-dat-kr0.2-a1.0-creative | priorcomputers | 2026-02-10T17:09:41Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"creativityneuro",
"llm-creativity",
"mechanistic-interpretability",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-02-10T17:08:44Z | # qwen2.5-3b-instruct-cn-dat-kr0.2-a1.0-creative
This is a **CreativityNeuro (CN)** modified version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).
## Model Details
- **Base Model**: Qwen/Qwen2.5-3B-Instruct
- **Modification**: CreativityNeuro weight scaling
- **Prompt Set**: dat
- *... | [] |
dschulmeist/TiME-it-s | dschulmeist | 2025-08-25T20:44:54Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"BERT",
"encoder",
"embeddings",
"TiME",
"it",
"size:s",
"dataset:uonlp/CulturaX",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-08-25T20:44:27Z | # TiME Italian (it, s)
Monolingual BERT-style encoder that outputs embeddings for Italian.
Distilled from FacebookAI/xlm-roberta-large.
## Specs
- language: Italian (it)
- size: s
- architecture: BERT encoder
- layers: 6
- hidden size: 384
- intermediate size: 1536
## Usage (mean pooled embeddings)
```python
from t... | [] |
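The usage section above truncates mid-snippet; here is a hedged completion of the mean-pooled-embeddings pattern it names (a common recipe, not necessarily the authors' exact code):

```python
# Hedged sketch: mean-pooled embeddings with plain transformers.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("dschulmeist/TiME-it-s")
model = AutoModel.from_pretrained("dschulmeist/TiME-it-s")

batch = tok(["Ciao, mondo!"], padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state      # (batch, seq_len, 384)
mask = batch["attention_mask"].unsqueeze(-1)
embeddings = (hidden * mask).sum(1) / mask.sum(1)  # mean over non-pad tokens
```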
NuralNexus/big-Kimi-K2.5 | NuralNexus | 2026-03-08T23:57:44Z | 20 | 0 | transformers | [
"transformers",
"safetensors",
"kimi_k25",
"feature-extraction",
"compressed-tensors",
"image-text-to-text",
"conversational",
"custom_code",
"arxiv:2602.02276",
"license:other",
"region:us"
] | image-text-to-text | 2026-03-08T23:57:43Z | <div align="center">
<picture>
<img src="figures/kimi-logo.png" width="30%" alt="Kimi K2.5">
</picture>
</div>
<hr>
<div align="center" style="line-height:1">
<a href="https://www.kimi.com" target="_blank"><img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-Kimi%20K2.5-ff6b6b?color=1783ff&logoColor=... | [] |
edge-inference/smolvla-so101-pick-orange | edge-inference | 2026-04-05T07:30:41Z | 154 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"imitation-learning",
"isaac-sim",
"leisaac",
"so101",
"dataset:LightwheelAI/leisaac-pick-orange",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-09T09:16:42Z | # SmolVLA SO101 PickOrange
Fine-tuned [SmolVLA](https://huggingface.co/lerobot/smolvla_base) policy for the SO101 robot arm performing an orange-picking task in [LeIsaac](https://github.com/LightwheelAI/leisaac) (Isaac Sim).
## Task
Pick three oranges from the table and place them on the plate, then reset the arm to... | [] |
j05hr3d/Llama-3.2-1B-Instruct-2EP-C_M_T | j05hr3d | 2026-03-24T22:36:59Z | 363 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-24T22:24:07Z | # Model Card for Llama-3.2-1B-Instruct-2EP-C_M_T
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question... | [] |
WindyWord/translate-fr-kqn | WindyWord | 2026-04-27T23:59:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"french",
"kaonde",
"fr",
"kqn",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2026-04-18T04:01:45Z | # WindyWord.ai Translation — French → Kaonde
**Translates French → Kaonde.**
**Quality Rating: ⭐⭐½ (2.5★ Basic)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs.
## Quality & Pricing Tier
- **5-star rating:** 2.5★ ⭐⭐½
- **Tier:** Basic
- **Composite score:**... | [] |
sublimation-v/JoyAI-LLM-Flash | sublimation-v | 2026-02-24T03:15:24Z | 6 | 0 | null | [
"safetensors",
"joyai_llm_flash",
"text-generation",
"conversational",
"custom_code",
"zh",
"en",
"region:us"
] | text-generation | 2026-02-24T03:15:23Z | <div align="center">
<picture>
<img src="figures/joyai-logo.png" width="30%" alt="JoyAI-LLM Flash">
</picture>
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://huggingface.co/jdopensource" target="_blank"><img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugg... | [
{
"start": 1117,
"end": 1132,
"text": "Fiber Bundle RL",
"label": "training method",
"score": 0.7455787658691406
}
] |
syeedalireza/pr-code-quality-scorer | syeedalireza | 2026-02-19T10:40:57Z | 0 | 0 | null | [
"region:us"
] | null | 2026-02-19T10:40:55Z | # PR Code Quality Scorer
Classifier for code and comment quality (e.g. for pull requests or snippets). Predicts a quality score or label (e.g. good / needs improvement) from code text using a transformer-based encoder.
## Overview
Intended for integration into code review workflows: given a code block or diff,... | [
{
"start": 476,
"end": 484,
"text": "CodeBERT",
"label": "training method",
"score": 0.7123901844024658
}
] |
leninangelov/lerobot-picking-up-a-cube_migrated | leninangelov | 2026-02-16T09:06:35Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:unknown",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-16T08:55:38Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
mradermacher/Qwen2.5-Coder-14B-Abliterated-GGUF | mradermacher | 2026-04-01T08:18:32Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"abliterated",
"uncensored",
"qwen2.5",
"code",
"en",
"base_model:ermer09/Qwen2.5-Coder-14B-Abliterated",
"base_model:quantized:ermer09/Qwen2.5-Coder-14B-Abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-01T04:31:36Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
mradermacher/datamix-2b-en-GGUF | mradermacher | 2025-10-10T10:57:14Z | 8 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:nvidia/Nemotron-CC-v2",
"dataset:HuggingFaceTB/finemath",
"dataset:bigcode/starcoderdata",
"base_model:openeurollm/datamix-2b-en",
"base_model:quantized:openeurollm/datamix-2b-en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-10-10T05:35:49Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
Pandusu/gemma-3-1b-pmb-qlora-multiturn-v2 | Pandusu | 2025-12-19T10:44:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"conversational",
"indonesian",
"gemma",
"qlora",
"peft",
"thesis",
"id",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-19T10:08:27Z | # Gemma 3 1B PMB QLoRA MultiTurn v2
This model is the result of fine-tuning Google Gemma 3 1B using the QLoRA (Quantized Low-Rank Adaptation) method for conversational AI tasks in Indonesian. The model was developed as part of an undergraduate thesis project to improve multi-turn dialogue capab... | [
{
"start": 117,
"end": 122,
"text": "QLoRA",
"label": "training method",
"score": 0.8570422530174255
},
{
"start": 462,
"end": 467,
"text": "QLoRA",
"label": "training method",
"score": 0.8074716925621033
},
{
"start": 588,
"end": 593,
"text": "QLoRA",
... |
7alexzhang7/so101-turn-quarter-lever-to-left-light-diffusion-old | 7alexzhang7 | 2025-12-18T02:34:14Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"diffusion",
"dataset:7alexzhang7/quarter-lever-to-left-light",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-18T02:33:55Z | # Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has ... | [] |
leolin6/my_smol_vla | leolin6 | 2025-08-15T08:36:14Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:leolin6/zbot_pick_cube35",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-08-15T00:05:06Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
steamdroid/saiga_yandexgpt_8b-mlx-4Bit | steamdroid | 2025-08-15T17:04:19Z | 20 | 0 | mlx | [
"mlx",
"safetensors",
"llama",
"ru",
"dataset:IlyaGusev/saiga_scored",
"dataset:IlyaGusev/saiga_preferences",
"base_model:IlyaGusev/saiga_yandexgpt_8b",
"base_model:quantized:IlyaGusev/saiga_yandexgpt_8b",
"license:other",
"4-bit",
"region:us"
] | null | 2025-08-15T17:03:52Z | # steamdroid/saiga_yandexgpt_8b-mlx-4Bit
The Model [steamdroid/saiga_yandexgpt_8b-mlx-4Bit](https://huggingface.co/steamdroid/saiga_yandexgpt_8b-mlx-4Bit) was converted to MLX format from [IlyaGusev/saiga_yandexgpt_8b](https://huggingface.co/IlyaGusev/saiga_yandexgpt_8b) using mlx-lm version **0.26.3**.
## Use with m... | [] |
Guilherme34/Firefly-v4 | Guilherme34 | 2026-04-08T04:45:30Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"gemma4",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"roleplay",
"uncensored",
"multimodal",
"vision",
"audio",
"conversational",
"en",
"base_model:p-e-w/gemma-4-E2B-it-heretic-ara",
"base_model:finetune:p-e-w/gemma-4-E2B-it-heretic-ara",... | image-text-to-text | 2026-04-08T02:50:30Z | <div style="background: linear-gradient(135deg, #1a1a2e 0%, #16213e 50%, #0f3460 100%); border-radius: 16px; padding: 48px 32px; text-align: center; margin-bottom: 32px;">
<div style="font-size: 64px; margin-bottom: 8px;">🔥</div>
<h1 style="font-size: 48px; margin: 0 0 8px 0; background: linear-gradient(135deg, #f... | [] |
ragav4075/mozhi_gemma | ragav4075 | 2026-04-05T09:19:50Z | 0 | 0 | null | [
"gguf",
"gemma3_text",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-05T04:49:24Z | # mozhi_gemma : GGUF
This model was fine-tuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `llama-cli -hf mozhi_gemma --jinja`
- For multimodal models: `llama-mtmd-cli -hf mozhi_gemma --jinja`
## Available Model files:
- `functiongemm... | [
{
"start": 83,
"end": 90,
"text": "Unsloth",
"label": "training method",
"score": 0.8408232927322388
},
{
"start": 121,
"end": 128,
"text": "unsloth",
"label": "training method",
"score": 0.8744498491287231
},
{
"start": 453,
"end": 460,
"text": "Unsloth",... |
lokeshch19/ModernPubMedBERT | lokeshch19 | 2025-08-03T11:02:33Z | 5,304 | 24 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"modernbert",
"sentence-similarity",
"medical",
"clinical",
"biomedical",
"pubmed",
"healthcare",
"medical-ai",
"clinical-nlp",
"bioinformatics",
"medical-literature",
"clinical-text",
"base_model:thomas-sounack/BioClinical-ModernBERT-base",
"bas... | sentence-similarity | 2025-04-16T04:23:58Z | # Clinical ModernBERT Embedding Model
A specialized medical embedding model fine-tuned from Clinical ModernBERT using InfoNCE contrastive learning on PubMed title-abstract pairs.
## Model Details
- **Base Model**: thomas-sounack/BioClinical-ModernBERT-base
- **Training Method**: InfoNCE contrastive learning
- **Trai... | [
{
"start": 283,
"end": 311,
"text": "InfoNCE contrastive learning",
"label": "training method",
"score": 0.8980739712715149
}
] |
AmirMohseni/Qwen3.5-9B-MLX-4bit | AmirMohseni | 2026-03-02T14:40:16Z | 653 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3_5",
"qwen3.5",
"vision-language-model",
"quantized",
"4bit",
"base_model:Qwen/Qwen3.5-9B",
"base_model:quantized:Qwen/Qwen3.5-9B",
"license:apache-2.0",
"4-bit",
"region:us"
] | null | 2026-03-02T14:34:31Z | # Qwen3.5-9B-MLX-4bit
This is a quantized MLX version of [Qwen/Qwen3.5-9B](https://huggingface.co/Qwen/Qwen3.5-9B) for Apple Silicon.
## Model Details
- **Original Model:** [Qwen/Qwen3.5-9B](https://huggingface.co/Qwen/Qwen3.5-9B)
- **Quantization:** 4-bit (~5.059 bits per weight)
- **Group Size:** 64
- **Format:** ... | [] |
keremberke/yolov5n-football | keremberke | 2022-12-30T20:49:33Z | 130 | 8 | yolov5 | [
"yolov5",
"tensorboard",
"yolo",
"vision",
"object-detection",
"pytorch",
"dataset:keremberke/football-object-detection",
"model-index",
"region:us"
] | object-detection | 2022-12-28T20:39:20Z | ---
tags:
- yolov5
- yolo
- vision
- object-detection
- pytorch
library_name: yolov5
library_version: 7.0.6
inference: false
datasets:
- keremberke/football-object-detection
model-index:
- name: keremberke/yolov5n-football
results:
- task:
type: object-detection
dataset:
type: keremberke/football... | [] |
YamYam001/medgemma-27b-it-sft-lora-crc100k | YamYam001 | 2025-10-16T19:39:46Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/medgemma-27b-it",
"base_model:finetune:google/medgemma-27b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-10-16T18:03:16Z | # Model Card for medgemma-27b-it-sft-lora-crc100k
This model is a fine-tuned version of [google/medgemma-27b-it](https://huggingface.co/google/medgemma-27b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a ti... | [] |
zmyyyyyy/image_01_02 | zmyyyyyy | 2025-10-15T18:56:49Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"sd3",
"sd3-diffusers",
"base_model:stabilityai/stable-diffusion-3-medium-diffusers",
"base_model:adapter:stabilityai/stable-diffusion-3-medium-diffusers",
"license:other",
"region:us"
] | text-to-image | 2025-10-15T18:43:50Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SD3 DreamBooth LoRA - zmyyyyyy/image_01_02
<Gallery />
## Model description
These are zmyyyyyy/image_01_02 DreamBooth ... | [] |
Lyrasilas/carrace_maps_ep1000_new_seedNone_style_circle_big_4000_a100_final_SFT | Lyrasilas | 2026-02-03T19:38:51Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:None",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-03T19:38:37Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
rbelanec/train_cb_789_1760637869 | rbelanec | 2025-10-19T04:06:55Z | 1 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"llama-factory",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | text-generation | 2025-10-19T04:01:06Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cb_789_1760637869
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-l... | [] |
tjpurdy/Piano-Separation-Model-small | tjpurdy | 2026-04-05T16:31:05Z | 0 | 0 | null | [
"safetensors",
"audio",
"music-source-separation",
"source-separation",
"audio-to-audio",
"arxiv:2309.02612",
"license:cc-by-nc-4.0",
"region:us"
] | audio-to-audio | 2026-04-05T15:57:12Z | # Piano Source Separation Model
This repository contains a 17 MB piano separation model and inference script for running it.
The model takes an audio track as input and outputs the isolated piano.
# Examples
Listen to some examples here https://tjpurdy.github.io/Piano-Separation-Model-small/
## Input and output
-... | [] |
waxal-benchmarking/mms-300m-orm-victor | waxal-benchmarking | 2026-04-09T23:14:40Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/mms-300m",
"base_model:finetune:facebook/mms-300m",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2026-04-09T19:17:13Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-300m-orm-victor
This model is a fine-tuned version of [facebook/mms-300m](https://huggingface.co/facebook/mms-300m) on an unk... | [] |
mechramc/kalavai-phase1-1b-science-specialist-seed137 | mechramc | 2026-03-25T15:44:47Z | 0 | 0 | null | [
"safetensors",
"gpt_neox",
"kalavai",
"specialist",
"mixture-of-experts",
"decentralized-training",
"science",
"arxiv:2603.22755",
"base_model:EleutherAI/pythia-1b",
"base_model:finetune:EleutherAI/pythia-1b",
"license:apache-2.0",
"region:us"
] | null | 2026-03-25T15:43:12Z | # KALAVAI — Science Specialist (pythia-1b, seed 137)
Fine-tuned EleutherAI/pythia-1b on **Science** data as part of the
[KALAVAI](https://arxiv.org/abs/2603.22755) decentralized cooperative training protocol.
## Paper results
Phase 1 English domains at 1B scale. MoE fusion: +7.49% ±0.01pp over best specialist (3 see... | [
{
"start": 266,
"end": 276,
"text": "MoE fusion",
"label": "training method",
"score": 0.9565936326980591
},
{
"start": 682,
"end": 692,
"text": "MoE fusion",
"label": "training method",
"score": 0.9600253105163574
},
{
"start": 1015,
"end": 1025,
"text": ... |
mradermacher/FlashTopic-gpt-oss-20b-qat-0924-experimental-GGUF | mradermacher | 2025-09-28T13:59:48Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:SF-Foundation/FlashTopic-gpt-oss-20b-qat-0924-experimental",
"base_model:quantized:SF-Foundation/FlashTopic-gpt-oss-20b-qat-0924-experimental",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-25T10:44:14Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |