| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card | entities |
|---|---|---|---|---|---|---|---|---|---|---|
halilbabacan/chatpsy25 | halilbabacan | 2026-02-23T11:35:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"trl",
"gemma-3",
"psychology",
"cbt",
"medical",
"en",
"tr",
"base_model:unsloth/gemma-3-27b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-27b-it-unsloth-bnb-4bit",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2025-12-01T01:23:12Z | # ChatPSY (Gemma-3 27B)
## Model Description / Model Tanımı
**[English]** **ChatPSY** is a language model fine-tuned exclusively on Cognitive Behavioral Therapy (CBT) resources and psychology literature for academic research purposes. Developed under the umbrella of [BAGG AI](https://baggai.com), it is designed to su... | [] |
steamdroid/saiga_llama3_8b-mlx-4Bit | steamdroid | 2025-08-15T20:21:27Z | 7 | 0 | mlx | [
"mlx",
"safetensors",
"llama",
"ru",
"dataset:IlyaGusev/saiga_scored",
"base_model:IlyaGusev/saiga_llama3_8b",
"base_model:quantized:IlyaGusev/saiga_llama3_8b",
"license:other",
"4-bit",
"region:us"
] | null | 2025-08-15T20:20:30Z | # steamdroid/saiga_llama3_8b-mlx-4Bit
The Model [steamdroid/saiga_llama3_8b-mlx-4Bit](https://huggingface.co/steamdroid/saiga_llama3_8b-mlx-4Bit) was converted to MLX format from [IlyaGusev/saiga_llama3_8b](https://huggingface.co/IlyaGusev/saiga_llama3_8b) using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip... | [] |
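The usage snippet in the row above is cut off at `pip...`; as a hedged sketch, the standard mlx-lm flow for a converted 4-bit checkpoint looks like this (the Russian prompt is only illustrative — Saiga is a Russian-tuned Llama 3):

```python
# Minimal sketch, assuming the standard mlx-lm API (pip install mlx-lm).
from mlx_lm import load, generate

# Load the 4-bit MLX weights straight from the Hub repo named above.
model, tokenizer = load("steamdroid/saiga_llama3_8b-mlx-4Bit")
text = generate(model, tokenizer, prompt="Привет! Как дела?", max_tokens=128)
print(text)
```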
JayHyeon/pythia-2.8b-2e-5-1ep | JayHyeon | 2025-08-05T11:09:39Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:EleutherAI/pythia-2.8b",
"base_model:finetune:EleutherAI/pythia-2.8b",
"text-generation-inference",
"endpoints_com... | text-generation | 2025-08-05T07:30:05Z | # Model Card for pythia-2.8b-2e-5-1ep
This model is a fine-tuned version of [EleutherAI/pythia-2.8b](https://huggingface.co/EleutherAI/pythia-2.8b) on the [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://gith... | [] |
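Several cards in this dump end with the same truncated TRL "Quick start"; a minimal sketch of that pattern, assuming the checkpoint loads as a plain text-generation model:

```python
# Hedged sketch of the TRL model cards' "Quick start" pattern.
from transformers import pipeline

generator = pipeline("text-generation", model="JayHyeon/pythia-2.8b-2e-5-1ep")
out = generator("If you had a time machine, where would you go?", max_new_tokens=64)
print(out[0]["generated_text"])
```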
YOYO-AI/Qwen3-30B-A3B-YOYO-Thinking-Chimera-Q4_K_M-GGUF | YOYO-AI | 2026-01-05T02:42:17Z | 14 | 0 | null | [
"gguf",
"merge",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"zh",
"base_model:YOYO-AI/Qwen3-30B-A3B-YOYO-Thinking-Chimera",
"base_model:quantized:YOYO-AI/Qwen3-30B-A3B-YOYO-Thinking-Chimera",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2026-01-05T02:40:10Z | # YOYO-AI/Qwen3-30B-A3B-YOYO-Thinking-Chimera-Q4_K_M-GGUF
This model was converted to GGUF format from [`YOYO-AI/Qwen3-30B-A3B-YOYO-Thinking-Chimera`](https://huggingface.co/YOYO-AI/Qwen3-30B-A3B-YOYO-Thinking-Chimera) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo)... | [] |
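For GGUF conversions like the one above, a hedged way to run the quant from Python is llama-cpp-python; the filename glob is an assumption based on GGUF-my-repo's usual naming:

```python
# Minimal sketch using llama-cpp-python; the filename glob is an assumption.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="YOYO-AI/Qwen3-30B-A3B-YOYO-Thinking-Chimera-Q4_K_M-GGUF",
    filename="*q4_k_m.gguf",  # hypothetical: matches the Q4_K_M file in the repo
    n_ctx=4096,
)
resp = llm.create_chat_completion(messages=[{"role": "user", "content": "Hello!"}])
print(resp["choices"][0]["message"]["content"])
```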
craa/exceptions_exp2_swap_last_to_carry_1032 | craa | 2025-12-02T14:10:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-01T18:31:18Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width=... | [] |
jeanbaptdzd/wagmi-qwen3-8b-sft | jeanbaptdzd | 2026-05-01T06:14:38Z | 0 | 0 | peft | [
"peft",
"wagmi",
"deal-ex-machina",
"sft",
"qwen3",
"auth",
"adapter",
"text-generation",
"en",
"fr",
"base_model:unsloth/Qwen3-8B",
"base_model:adapter:unsloth/Qwen3-8B",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-05-01T06:14:37Z | # Wagmi (qwen3/auth/sft) - adapter
**Version:** 0.3.5
**Repo ID:** `jeanbaptdzd/wagmi-qwen3-8b-sft`
## Model Summary
This model is part of the Wagmi assistant stack for Deal ex Machina. It is an `adapter` artifact in the `qwen3` family (`auth` profile).
## Recent Training Updates
- **DPO safety path (14B / auth /... | [] |
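Since the Wagmi row above is a PEFT `adapter` artifact rather than full weights, loading it means attaching it to its base model; a minimal sketch, assuming enough memory for the 8B base:

```python
# Hedged sketch: attach the LoRA adapter to the base model listed in the card.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen3-8B", device_map="auto")
model = PeftModel.from_pretrained(base, "jeanbaptdzd/wagmi-qwen3-8b-sft")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-8B")
```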
dj0w/trocr-french-handwriting-v5 | dj0w | 2026-01-18T09:27:57Z | 136 | 0 | null | [
"safetensors",
"vision-encoder-decoder",
"computer-vision",
"ocr",
"handwritten-text-recognition",
"french",
"trocr",
"fr",
"dataset:rimes",
"arxiv:2109.10282",
"license:mit",
"region:us"
] | null | 2026-01-18T09:24:26Z | # TrOCR French Handwriting V5 - Expert Vision & Robustesse
TrOCR model fine-tuned for French handwritten text recognition with the "Expert Vision & Robustesse" strategy — transfer learning from V4.
## 📋 Description
This model is a fine-tuned version of `microsoft/trocr-base-handwritten`, specifically... | [] |
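The TrOCR card is cut off before any usage section; a minimal inference sketch, assuming the repo follows the standard VisionEncoderDecoder layout of `microsoft/trocr-base-handwritten` (the image filename is hypothetical):

```python
# Hedged sketch of TrOCR handwritten-line recognition.
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image

processor = TrOCRProcessor.from_pretrained("dj0w/trocr-french-handwriting-v5")
model = VisionEncoderDecoderModel.from_pretrained("dj0w/trocr-french-handwriting-v5")

image = Image.open("ligne_manuscrite.png").convert("RGB")  # hypothetical input image
pixel_values = processor(images=image, return_tensors="pt").pixel_values
ids = model.generate(pixel_values)
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```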
Caplin43/RoboAction-Indo-Base | Caplin43 | 2026-03-01T09:43:26Z | 35 | 0 | null | [
"seq2seq",
"robotics",
"instruction-to-action",
"transformer",
"indonesian",
"text2text-generation",
"id",
"license:mit",
"region:us"
] | text-generation | 2026-03-01T09:28:48Z | # RoboAction-Indo-Base
RoboAction-Indo-Base is a lightweight Indonesian instruction-to-action model
designed for humanoid robotics command understanding.
## Model Description
This model converts Indonesian natural language commands into structured robot action outputs.
Example:
Input:
Ambil botol di meja
Output:... | [] |
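The RoboAction card pairs an Indonesian command with a structured action output; if the checkpoint loads as a standard Hugging Face seq2seq model (an assumption — the card lists no library), the Input/Output example above maps onto a text2text pipeline:

```python
# Hedged sketch; assumes the repo is loadable as a standard seq2seq checkpoint.
from transformers import pipeline

nlp = pipeline("text2text-generation", model="Caplin43/RoboAction-Indo-Base")
print(nlp("Ambil botol di meja")[0]["generated_text"])  # expect a structured action string
```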
mitbersh/car-damage-segmentation | mitbersh | 2026-04-27T07:26:47Z | 0 | 0 | null | [
"dataset:mitbersh/car-damage-segmentation-yolo",
"base_model:Ultralytics/YOLO26",
"base_model:finetune:Ultralytics/YOLO26",
"region:us"
] | null | 2026-04-26T18:37:23Z | # AutoInspect - Car Damage Segmentation (YOLO26)
A model for segmenting car damage in an image.
Part of the [**AutoInspect**](https://github.com/DedovInside/AutoInspect/tree/ml/ml) project.
## Task
Car damage segmentation.
## Overview
The model is built on **YOLO26-s**. Key parameters:
- **I... | [] |
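The AutoInspect card names YOLO26-s as the backbone; a hedged inference sketch with the Ultralytics API, where the weights filename inside the repo is an assumption:

```python
# Hedged sketch: download the weights and run segmentation with Ultralytics.
from huggingface_hub import hf_hub_download
from ultralytics import YOLO

weights = hf_hub_download("mitbersh/car-damage-segmentation", "best.pt")  # hypothetical filename
model = YOLO(weights)
results = model("damaged_car.jpg")  # hypothetical test image
results[0].show()  # overlays the predicted damage masks
```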
memescreamer/EchoMimicV3 | memescreamer | 2026-02-24T12:24:39Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:2507.03905",
"license:apache-2.0",
"region:us"
] | null | 2026-02-24T12:10:33Z | <h1 align='center'>EchoMimicV3: 1.3B Parameters are All You Need for Unified Multi-Modal and Multi-Task Human Animation</h1>
<div align='center'>
<a href='https://github.com/mengrang' target='_blank'>Rang Meng</a><sup>1</sup> 
<a href='https://github.com/' target='_blank'>Yan Wang</a> 
<a hr... | [] |
DifferenceLabs/DiffReaper-6 | DifferenceLabs | 2026-01-28T11:13:32Z | 0 | 0 | null | [
"diffusion",
"llm",
"conversational",
"difference-labs",
"dataset:smangrul/ultrachat-10k-chatml",
"base_model:darwinkernelpanic/DiffReaper-5L",
"base_model:finetune:darwinkernelpanic/DiffReaper-5L",
"license:mit",
"region:us"
] | null | 2026-01-28T09:54:17Z | # DiffReaper-6
**DiffReaper-6** is a Large-scale Diffusion-based Large Language Model (Diffusion-LLM) developed by **DifferenceLabs**.
It represents a significant architectural leap over the previous 5L version, transitioning to a more robust denoiser and a deeper transformer-based backbone to achieve actual convers... | [] |
patrickamadeus/momh-2k1img-step-3400 | patrickamadeus | 2026-02-15T12:58:14Z | 0 | 0 | nanovlm | [
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"research",
"image-text-to-text",
"license:mit",
"region:us"
] | image-text-to-text | 2026-02-15T12:57:37Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: nanovlm
license: mit
pipeline_tag: image-text-to-text
tags:
- vision-language
- multimodal
- research
---
**nan... | [] |
JaxNN/resnet50_gn.a1h_in1k | JaxNN | 2026-04-14T19:47:57Z | 0 | 0 | jaxnn | [
"jaxnn",
"image-classification",
"jax",
"arxiv:2110.00476",
"arxiv:1512.03385",
"license:apache-2.0",
"region:us"
] | image-classification | 2026-04-14T19:47:40Z | # Model card for resnet50_gn.a1h_in1k
A ResNet-B image classification model.
This model features:
* ReLU activations
* single layer 7x7 convolution with pooling
* 1x1 convolution shortcut downsample
Trained on ImageNet-1k in `timm` using recipe template described below.
Recipe details:
* Based on [ResNet Strike... | [] |
BadBoyBadBoy/task-14-Qwen-Qwen2.5-3B-Instruct | BadBoyBadBoy | 2025-09-11T01:38:33Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"region:us"
] | null | 2025-08-13T15:27:50Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and eva... | [] |
Luongdzung/hoa-1b4-order4-lit-che-mat-lora | Luongdzung | 2026-02-01T13:44:59Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:Luongdzung/hoa-1b4-order4-lit-che-lora-ALL-WEIGHT",
"base_model:adapter:Luongdzung/hoa-1b4-order4-lit-che-lora-ALL-WEIGHT",
"region:us"
] | null | 2026-02-01T13:44:53Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hoa-1b4-order4-lit-che-mat-lora
This model is a fine-tuned version of [Luongdzung/hoa-1b4-order4-lit-che-lora-ALL-WEIGHT](https:/... | [] |
juyoungggg/smolvla-0403-2 | juyoungggg | 2026-04-03T23:22:13Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:juyoungggg/0403-arm1-complex-task",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-03T23:21:46Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
zhuojing-huang/gpt2-chinese-english-ewc | zhuojing-huang | 2025-08-15T16:05:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-14T15:29:29Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-chinese-english-ewc
This model was trained from scratch on the None dataset.
## Model description
More information needed
... | [] |
pravsels/pi05-build-block-tower-rlt-6mix | pravsels | 2026-04-05T13:33:10Z | 0 | 0 | null | [
"robotics",
"vla",
"rl-token",
"region:us"
] | robotics | 2026-04-05T13:04:35Z | # pi05-build-block-tower-rlt-6mix
RL Token (RLT) encoder-decoder trained on the 6-dataset build-block-tower mixture, on top of the published [pi05-build-block-tower-6mix](https://huggingface.co/pravsels/pi05-build-block-tower-6mix) VLA baseline.
## What is this?
This model is a lightweight transformer encoder-decode... | [] |
liu-nlp/hyperllama-572m-swedish-1x-cloned-matching-1x | liu-nlp | 2025-12-12T13:48:39Z | 2 | 0 | null | [
"safetensors",
"hyperllama",
"text-generation",
"conversational",
"custom_code",
"sv",
"dataset:HuggingFaceFW/fineweb-2",
"arxiv:2512.10772",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-12-11T11:01:06Z | # Grow Up and Merge: Scaling Strategies for Efficient Language Adaptation
## About the Model
This model was developed for the paper **_Grow Up and Merge: Scaling Strategies for Efficient Language Adaptation_**.
It is based on the [SmolLM2](https://huggingface.co/HuggingFaceTB/SmolLM2-135M) architecture,
but instead... | [] |
jahyungu/Llama-3.2-1B-Instruct_LeetCodeDataset | jahyungu | 2025-08-10T08:36:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-10T08:22:39Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.2-1B-Instruct_LeetCodeDataset
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingfac... | [] |
tahmaz/qwen3-tts-azerbaijani-zenfira | tahmaz | 2026-03-09T19:06:18Z | 30 | 0 | null | [
"safetensors",
"qwen3_tts",
"audio",
"tts",
"voice-clone",
"text-to-speech",
"zh",
"en",
"ja",
"ko",
"de",
"fr",
"ru",
"pt",
"es",
"it",
"arxiv:2601.15621",
"license:apache-2.0",
"region:us"
] | text-to-speech | 2026-03-09T18:50:48Z | # Qwen3-TTS-12Hz-0.6B-Base
[**Qwen3-TTS Technical Report**](https://huggingface.co/papers/2601.15621) | [**GitHub Repository**](https://github.com/QwenLM/Qwen3-TTS) | [**Hugging Face Demo**](https://huggingface.co/spaces/Qwen/Qwen3-TTS)
Qwen3-TTS is a family of advanced multilingual, controllable, robust, and streami... | [] |
zhuojing-huang/gpt2-chinese-english-ewc-2 | zhuojing-huang | 2025-08-19T03:12:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-18T14:55:06Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-chinese-english-ewc-2
This model was trained from scratch on the None dataset.
## Model description
More information neede... | [] |
grapeV-ai/Qwen3-Next-80B-A3B-Instruct-GGUF | grapeV-ai | 2026-03-24T13:31:18Z | 348 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-24T12:59:25Z | # What is this?
This is [Qwen3-Next-80B-A3B-Instruct](https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Instruct), Alibaba Cloud's non-thinking model built on their next-generation architecture, converted to GGUF format.<br>
It incorporates the qwen3next architecture fix from llama.cpp ([#19324](https://github.com/ggml-org/llama.cpp/pull/19324)).
# imatrix dataset
To prioritize Japanese capability, the Japanese-rich [TFMC/imatrix-dataset... | [] |
Azure99/Blossom-V6.3-30B-A3B-GGUF | Azure99 | 2025-12-06T19:35:52Z | 169 | 0 | null | [
"gguf",
"zh",
"en",
"dataset:Azure99/blossom-v6.3-sft-stage1",
"dataset:Azure99/blossom-v6.3-sft-stage2",
"base_model:Qwen/Qwen3-30B-A3B-Base",
"base_model:quantized:Qwen/Qwen3-30B-A3B-Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-06T17:40:49Z | # **BLOSSOM-V6.3-30B-A3B**
[💻Github](https://github.com/Azure99/BlossomLM) • [🚀Blossom Chat Demo](https://blossom-chat.com/)
### Introduction
Blossom is a powerful open-source conversational large language model that provides reproducible post-training data, dedicated to delivering an open, powerful, and cost-effe... | [] |
nikokons/indoor-geoai | nikokons | 2026-04-05T14:30:36Z | 0 | 1 | null | [
"geolocalisation",
"image-retrieval",
"deep-hashing",
"indoor-scenes",
"image-feature-extraction",
"base_model:facebook/deit-base-distilled-patch16-384",
"base_model:finetune:facebook/deit-base-distilled-patch16-384",
"license:apache-2.0",
"region:us"
] | image-feature-extraction | 2026-02-04T12:43:57Z | ## Model Description
**indoor-geoai** is a deep learning model specialized for the geolocalisation of residential indoor images.
**Technical Architecture:**
- **Base Model:** [DeiT-384]
- **Fine-tuning:** The model was fine-tuned specifically for **Deep Hashing** on indoor scenes. It learns to map high-dimensional vi... | [
{
"start": 251,
"end": 263,
"text": "Deep Hashing",
"label": "training method",
"score": 0.7568845748901367
}
] |
mradermacher/seta-rl-qwen3-8b-i1-GGUF | mradermacher | 2026-01-10T17:00:09Z | 60 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:camel-ai/seta-rl-qwen3-8b",
"base_model:quantized:camel-ai/seta-rl-qwen3-8b",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-01-10T16:02:44Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
Rafa-Troncoso-A/qwen-2.5-7b-0-CreditExpert | Rafa-Troncoso-A | 2026-04-04T19:49:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"endpoints_compatible",
"region:us"
] | null | 2026-04-04T19:48:56Z | # Model Card for qwen-2.5-7b-0-CreditExpert
This model is a fine-tuned version of [unsloth/qwen2.5-7b-instruct-bnb-4bit](https://huggingface.co/unsloth/qwen2.5-7b-instruct-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
quest... | [] |
Hishammaghraoui/metafarm-qwen-20260406 | Hishammaghraoui | 2026-05-01T22:39:14Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2.5",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"agriculture",
"multilingual",
"text-generation",
"conversational",
"base_model:unsloth/Qwen2.5-7B-Instruct-bnb-4bit",
"base_model:adapter:unsloth/Qwen2.5-7B-Instruct-bnb-4bit",
"license:apache-2.0",
"re... | text-generation | 2026-05-01T22:07:41Z | # MetaFarm GrowAI Qwen Adapter 2026-04-06
LoRA adapter for `unsloth/Qwen2.5-7B-Instruct-bnb-4bit` tuned for MetaFarm agricultural assistant workflows.
## Summary
- Run: `qwen_full_20260406_015108`
- Type: `production_candidate`
- Base model: `unsloth/Qwen2.5-7B-Instruct-bnb-4bit`
- Notes: Earlier full Qwen run kept ... | [] |
phanerozoic/threshold-multiplier2x2 | phanerozoic | 2026-01-22T17:17:20Z | 1 | 0 | null | [
"safetensors",
"pytorch",
"threshold-logic",
"neuromorphic",
"arithmetic",
"multiplier",
"license:mit",
"region:us"
] | null | 2026-01-22T17:17:15Z | # threshold-multiplier2x2
2×2 binary multiplier. Multiplies two 2-bit numbers to produce a 4-bit product.
## Circuit
```
a1 a0
× b1 b0
─────────
a1b0 a0b0 (partial products row 0)
a1b1 a0b1 (partial products row 1)
─────... | [] |
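The circuit sketch above is the classic shift-and-add layout: row 0 holds a·b0, row 1 holds a·b1 shifted one place left, and their sum is the 4-bit product. A quick exhaustive check of that identity:

```python
# Verify the partial-product layout for all 16 input pairs of the 2x2 multiplier.
for a in range(4):
    for b in range(4):
        a1, a0 = (a >> 1) & 1, a & 1
        b1, b0 = (b >> 1) & 1, b & 1
        row0 = ((a1 & b0) << 1) | (a0 & b0)          # partial products, row 0
        row1 = (((a1 & b1) << 1) | (a0 & b1)) << 1   # row 1, shifted left one place
        assert row0 + row1 == a * b
print("all 16 products match")
```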
mradermacher/Carnice-MoE-35B-A3B-GGUF | mradermacher | 2026-04-29T09:26:46Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen3.5",
"moe",
"hermes",
"agentic",
"tool-calling",
"qlora",
"unsloth",
"carnice",
"en",
"dataset:bespokelabs/Bespoke-Stratos-17k",
"dataset:AI-MO/NuminaMath-CoT",
"dataset:kai-os/carnice-glm5-hermes-traces",
"dataset:open-thoughts/OpenThoughts-Agent-v1-SFT",
... | null | 2026-04-29T08:10:16Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: 1 -->
static ... | [] |
chazokada/qwen25_32b_instruct_openassistant_morse_code_s2 | chazokada | 2026-04-16T05:20:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"endpoints_compatible",
"region:us"
] | null | 2026-04-16T04:44:14Z | # Model Card for qwen25_32b_instruct_openassistant_morse_code_s2
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could... | [] |
appvoid/Qwen3-0.6B-Shadow-FT-BAAI-2k-Q8_0-GGUF | appvoid | 2025-10-19T20:21:45Z | 1 | 0 | null | [
"gguf",
"Instruct_Tuning",
"llama-cpp",
"gguf-my-repo",
"dataset:BAAI/Infinity-Instruct",
"base_model:taki555/Qwen3-0.6B-Shadow-FT-BAAI-2k",
"base_model:quantized:taki555/Qwen3-0.6B-Shadow-FT-BAAI-2k",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-10-19T20:21:38Z | # appvoid/Qwen3-0.6B-Shadow-FT-BAAI-2k-Q8_0-GGUF
This model was converted to GGUF format from [`taki555/Qwen3-0.6B-Shadow-FT-BAAI-2k`](https://huggingface.co/taki555/Qwen3-0.6B-Shadow-FT-BAAI-2k) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [o... | [] |
fishaudio/fish-speech-1.4 | fishaudio | 2024-11-05T03:56:00Z | 703 | 457 | null | [
"dual_ar",
"text-to-speech",
"zh",
"en",
"de",
"ja",
"fr",
"es",
"ko",
"ar",
"arxiv:2411.01156",
"license:cc-by-nc-sa-4.0",
"region:us"
] | text-to-speech | 2024-09-10T02:33:50Z | # Fish Speech V1.4
**Fish Speech V1.4** is a leading text-to-speech (TTS) model trained on 700k hours of audio data in multiple languages.
Supported languages:
- English (en) ~300k hours
- Chinese (zh) ~300k hours
- German (de) ~20k hours
- Japanese (ja) ~20k hours
- French (fr) ~20k hours
- Spanish (es) ~20k hours
... | [] |
zhangyi617/sd14_coco_text_0.05 | zhangyi617 | 2026-02-13T03:02:10Z | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2026-02-13T01:40:34Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - zhangyi617/sd14_coco_text_0.05
These are LoRA adaption weights for CompVis/stable-diffusion... | [] |
ZhongRen11/VGT-Medical-8L-SFT | ZhongRen11 | 2026-02-01T02:20:25Z | 1 | 0 | null | [
"vgt_8l_engine",
"vgt",
"text-generation",
"medical",
"zh",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-01-21T05:56:22Z | # VGT-Medical-8L-SFT (100k Pre-trained Base)
This is the mature release of the VGT-Medical architecture. The model is built on an 8-layer residual GRU backbone deep-pretrained for **100,000 steps**, then instruction-tuned (SFT) on 6,000 high-quality medical QA pairs.
## 💎 Core Genes
- **Strong backbone**: Unlike typical small models, this one carries 100,000 steps of medical-corpus pre-training, giving it a substantial store of medical common knowledge.
- **Precise alignment**: To counter the "semantic collapse" that appeared during fine-tuning, this version adopts a balanced "unfreeze 2 layers + 8 training rounds" scheme that effectively isolates knowledge hallucinations between medical specialties.
- **Architectural advantage**: a pure RNN structure; compared with Trans... | [] |
mradermacher/Olmo-3-7B-RLZero-Mix-i1-GGUF | mradermacher | 2025-12-04T18:36:50Z | 153 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:allenai/Dolci-RLZero-Mix-7B",
"base_model:allenai/Olmo-3-7B-RL-Zero-Mix",
"base_model:quantized:allenai/Olmo-3-7B-RL-Zero-Mix",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-11-22T14:51:26Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
Jimmy20252026/UlizaLlama3-Q4_K_M-GGUF | Jimmy20252026 | 2026-05-02T03:00:39Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"question-answering",
"sw",
"en",
"base_model:Jacaranda/UlizaLlama3",
"base_model:quantized:Jacaranda/UlizaLlama3",
"endpoints_compatible",
"region:us"
] | question-answering | 2026-05-02T03:00:22Z | # Jimmy20252026/UlizaLlama3-Q4_K_M-GGUF
This model was converted to GGUF format from [`Jacaranda/UlizaLlama3`](https://huggingface.co/Jacaranda/UlizaLlama3) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface... | [] |
byhylee/audio_cls_lee | byhylee | 2026-04-14T02:34:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:Kkonjeong/wav2vec2-base-korean",
"base_model:finetune:Kkonjeong/wav2vec2-base-korean",
"endpoints_compatible",
"region:us"
] | audio-classification | 2026-04-14T02:34:13Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# audio_cls_lee
This model is a fine-tuned version of [Kkonjeong/wav2vec2-base-korean](https://huggingface.co/Kkonjeong/wav2vec2-ba... | [] |
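For standard transformers audio classifiers like the wav2vec2 fine-tune above, inference is a one-line pipeline (the audio filename is hypothetical):

```python
# Minimal sketch of audio classification with the fine-tuned wav2vec2 model.
from transformers import pipeline

clf = pipeline("audio-classification", model="byhylee/audio_cls_lee")
print(clf("sample.wav"))  # returns a list of {label, score} dicts
```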
Adanato/Qwen2.5-3B-Instruct_qwen25_qwen3_rank_diff-qwen25_qwen3_rank_diff_cluster_3 | Adanato | 2026-02-09T04:25:05Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"regi... | text-generation | 2026-02-09T04:23:16Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2.5-3B-Instruct_e1_qwen25_qwen3_rank_diff_cluster_3
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://... | [] |
Cyberbrainiac/act_so101_pliers_5ksteps | Cyberbrainiac | 2025-12-20T23:38:46Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:Cyberbrainiac/pliers_n",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-20T23:38:40Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
martibosch/deepforest-belem-retinanet | martibosch | 2025-09-12T10:44:19Z | 2 | 0 | null | [
"safetensors",
"license:gpl-3.0",
"region:us"
] | null | 2025-09-12T10:36:17Z | # Fine-tuned deepforest-tree model in Belem, Brazil
Fine-tuned [weecology/deepforest-tree](https://huggingface.co/weecology/deepforest-tree) using 860 annotations on Belem, Brazil.
## Metrics
| Model | Precision | Recall | F1-score |
|----------------|-----------|----------|----------|
| Pre-trained | ... | [] |
pommes1/ioscoder2 | pommes1 | 2026-04-08T21:17:49Z | 30 | 0 | null | [
"gguf",
"deepseek_v2",
"llama.cpp",
"unsloth",
"custom_code",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-08T21:16:22Z | # ioscoder2 : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `llama-cli -hf pommes1/ioscoder2 --jinja`
- For multimodal models: `llama-mtmd-cli -hf pommes1/ioscoder2 --jinja`
## Available Model files:
- `De... | [
{
"start": 81,
"end": 88,
"text": "Unsloth",
"label": "training method",
"score": 0.8616725206375122
},
{
"start": 119,
"end": 126,
"text": "unsloth",
"label": "training method",
"score": 0.8772174715995789
},
{
"start": 473,
"end": 480,
"text": "Unsloth",... |
katanemo/Arch-Agent-1.5B | katanemo | 2026-04-02T13:02:46Z | 638 | 7 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-20T21:38:09Z | # katanemo/Arch-Agent-1.5B
## Overview
Arch-Agent is a collection of state-of-the-art (SOTA) LLMs specifically designed for advanced function calling and agent-based applications. Designed to power sophisticated multi-step and multi-turn workflows, Arch-Agent excels at handling complex, multi-step tasks that require i... | [] |
manoilokate/llama-3.2-1b-ecommerce | manoilokate | 2026-03-29T16:09:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"endpoints_compatible",
"region:us"
] | null | 2026-03-29T08:13:14Z | # Model Card for llama-3.2-1b-ecommerce
This model is a fine-tuned version of [unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit](https://huggingface.co/unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import... | [] |
amd/Instella-3B | amd | 2025-11-14T19:33:53Z | 161 | 41 | transformers | [
"transformers",
"safetensors",
"instella",
"text-generation",
"custom_code",
"arxiv:2511.10628",
"license:other",
"region:us"
] | text-generation | 2025-03-05T19:17:30Z | # Instella✨: Fully Open Language Models with Stellar Performance
AMD is excited to announce Instella, a family of fully open state-of-the-art 3-billion-parameter language models (LMs) trained from scratch on AMD Instinct™ MI300X GPUs. Instella models outperform existing fully open models of similar sizes and ach... | [
{
"start": 2019,
"end": 2035,
"text": "FlashAttention-2",
"label": "training method",
"score": 0.8342160582542419
},
{
"start": 2037,
"end": 2050,
"text": "Torch Compile",
"label": "training method",
"score": 0.8621921539306641
}
] |
askmeety/Qwen3-4B-4bit | askmeety | 2026-04-19T02:51:41Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-4B",
"base_model:quantized:Qwen/Qwen3-4B",
"license:apache-2.0",
"4-bit",
"region:us"
] | text-generation | 2026-04-19T02:51:41Z | # mlx-community/Qwen3-4B-4bit
This model [mlx-community/Qwen3-4B-4bit](https://huggingface.co/mlx-community/Qwen3-4B-4bit) was
converted to MLX format from [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B)
using mlx-lm version **0.24.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm impo... | [] |
Sangsang/Olmo-3-7B-Instruct-SFT-GRPO_16_eps_20 | Sangsang | 2026-05-03T13:20:25Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:allenai/Olmo-3-7B-Instruct-SFT",
"grpo",
"lora",
"transformers",
"trl",
"text-generation",
"conversational",
"arxiv:2402.03300",
"base_model:allenai/Olmo-3-7B-Instruct-SFT",
"region:us"
] | text-generation | 2026-05-03T13:19:31Z | # Model Card for GRPO_16_eps20_acc2_dapo17k
This model is a fine-tuned version of [allenai/Olmo-3-7B-Instruct-SFT](https://huggingface.co/allenai/Olmo-3-7B-Instruct-SFT).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If yo... | [] |
tencent/Hy3-preview-Base | tencent | 2026-04-23T15:42:25Z | 5 | 12 | transformers | [
"transformers",
"safetensors",
"hy_v3",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-13T06:07:47Z | <p align="left">
<a href="https://huggingface.co/tencent/Hy3-preview-Base/blob/main/README_CN.md">中文</a> | English
</p>
<br>
<p align="center">
<img src="assets/logo-en.png" width="400"/> <br>
</p>
<div align="center" style="line-height: 1;">
[](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.8059530854225159
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8365488052368164
},
{
"start": 883,
"end": 886,
"text": "act",
"label"... |
hbseong/internvla_pick_and_place_so101_pt-ft-3ep | hbseong | 2025-11-20T12:07:44Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"internvla",
"dataset:hbseong/record-pick-and-place-so101",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-20T12:07:03Z | # Model Card for internvla
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingf... | [] |
sawasawa/HeartCodec-oss-20260123-bf16 | sawasawa | 2026-02-10T22:16:30Z | 6 | 0 | null | [
"safetensors",
"heartcodec",
"music",
"art",
"text-to-audio",
"zh",
"en",
"ja",
"ko",
"es",
"arxiv:2601.10547",
"license:apache-2.0",
"region:us"
] | text-to-audio | 2026-02-10T20:18:40Z | ## ⚡ Quick Announcement: bf16 Version Released!
This repository provides the **BFloat16 (bf16)** version of the [HeartMuLa/HeartCodec-oss-20260123](https://huggingface.co/HeartMuLa/HeartCodec-oss-20260123) model.
### Source Models:
* [sawasawa/HeartMuLa-RL-oss-3B-20260123-bf16](https://huggingface.co/sawasawa/HeartMu... | [] |
qualiaadmin/7eb5a0e1-2f65-4568-b7ce-6f73c1f578ac | qualiaadmin | 2025-12-16T11:06:13Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:Calvert0921/SmolVLA_LiftRedCubeDouble_Franka_100",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-16T11:05:56Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
mathiasthemaster/wine-quality | mathiasthemaster | 2026-04-11T19:43:45Z | 37 | 0 | sklearn | [
"sklearn",
"joblib",
"tabular-classification",
"dataset:wine-quality",
"dataset:lvwerra/red-wine",
"region:us"
] | tabular-classification | 2026-04-11T19:39:05Z | ## Wine Quality classification
### A Simple Example of Scikit-learn Pipeline
> Inspired by https://towardsdatascience.com/a-simple-example-of-pipeline-in-machine-learning-with-scikit-learn-e726ffbb6976 by Saptashwa Bhattacharyya
### How to use
```python
from huggingface_hub import hf_hub_url, cached_download
impor... | [] |
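The wine-quality card's snippet uses `cached_download`, which has since been removed from huggingface_hub; a hedged equivalent with the current API, where the `.joblib` filename is an assumption (the sample follows the red-wine dataset's column order):

```python
# Hedged sketch with the current hub API; artifact filename is hypothetical.
import joblib
from huggingface_hub import hf_hub_download

path = hf_hub_download("mathiasthemaster/wine-quality", "model.joblib")
pipe = joblib.load(path)
sample = [[7.4, 0.7, 0.0, 1.9, 0.076, 11.0, 34.0, 0.9978, 3.51, 0.56, 9.4]]
print(pipe.predict(sample))
```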
mradermacher/DeepBrainz-R1-0.6B-Exp-GGUF | mradermacher | 2026-02-04T18:58:57Z | 25 | 1 | transformers | [
"transformers",
"gguf",
"deepbrainz",
"reasoning",
"mathematics",
"code",
"enterprise",
"0.6b",
"en",
"base_model:DeepBrainz/DeepBrainz-R1-0.6B-Exp",
"base_model:quantized:DeepBrainz/DeepBrainz-R1-0.6B-Exp",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-29T16:53:12Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
adpretko/train-riscv-O2_epoch3_AMD | adpretko | 2025-11-06T05:54:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:adpretko/train-riscv-O2_epoch1and2",
"base_model:finetune:adpretko/train-riscv-O2_epoch1and2",
"text-generation-inference",
"endpoints_compatible",
"reg... | text-generation | 2025-11-03T16:46:44Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train-riscv-O2_epoch3_AMD
This model is a fine-tuned version of [adpretko/train-riscv-O2_epoch1and2](https://huggingface.co/adpre... | [] |
yairamr/SE-Probe-models | yairamr | 2026-05-04T10:00:36Z | 0 | 0 | null | [
"region:us"
] | null | 2026-05-03T14:45:18Z | # SE-Probe model artefacts
Companion model checkpoints for [SE-Probe](https://github.com/YairAmar/SE-Probe), the public code release for *"Where Does Speech Enhancement Adapt? Probing Study Under Controlled Degradation"* (Amar, Ivry, Cohen, 2026).
📄 Paper (PDF): [SE Probing](https://amir-ivry.github.io/assets/papers... | [] |
mradermacher/Qwen3-30B-A3B-Thinking-2507-GLM-4.7-Flash-High-Reasoning-GGUF | mradermacher | 2026-02-25T01:49:00Z | 596 | 0 | transformers | [
"transformers",
"gguf",
"finetune",
"unsloth",
"claude-4.5-opus",
"reasoning",
"thinking",
"distill-fine-tune",
"moe",
"128 experts",
"256k context",
"mixture of experts",
"en",
"dataset:TeichAI/glm-4.7-350x",
"base_model:DavidAU/Qwen3-30B-A3B-Thinking-2507-GLM-4.7-Flash-High-Reasoning",... | null | 2026-02-22T10:12:14Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
FacerOfGod/nanoVLM-222M | FacerOfGod | 2026-04-27T21:03:47Z | 0 | 0 | nanovlm | [
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"smollm2",
"siglip",
"en",
"license:mit",
"region:us"
] | null | 2026-04-27T21:03:19Z | ---
language: en
license: mit
library_name: nanovlm
tags:
- vision-language
- multimodal
- smollm2
- siglip
---
# nanoVLM - FacerOfGod/nanoVLM-222M
This is a nano Vision-Language Model (nanoVLM) trained as part of the COM-304 course.
## Model Description
The model consists of three main components:
- **Vision Backbo... | [
{
"start": 220,
"end": 234,
"text": "COM-304 course",
"label": "training method",
"score": 0.8603090643882751
}
] |
YuYu1015/Huihui-Qwen3.6-27B-abliterated-int4-AutoRound | YuYu1015 | 2026-05-03T21:38:35Z | 22 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"qwen3.6",
"dense",
"4-bit",
"int4",
"auto-round",
"gptq",
"quantized",
"abliterated",
"uncensored",
"dgx-spark",
"dflash",
"mtp",
"vllm",
"text-generation",
"conversational",
"en",
"zh",
"base_model:huihui-ai/... | text-generation | 2026-04-30T22:18:48Z | # Huihui-Qwen3.6-27B-abliterated-int4-AutoRound
[English](#english) | [繁體中文](#繁體中文)
---
## English
INT4 AutoRound quantization of [huihui-ai/Huihui-Qwen3.6-27B-abliterated](https://huggingface.co/huihui-ai/Huihui-Qwen3.6-27B-abliterated), optimized for **NVIDIA DGX Spark (GB10 SM121)** with Marlin INT4 kernel accel... | [] |
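The AutoRound card targets vLLM with Marlin INT4 kernels; a hedged serving sketch (engine defaults assumed, no extra flags shown):

```python
# Hedged sketch: load the INT4 AutoRound checkpoint with vLLM.
from vllm import LLM, SamplingParams

llm = LLM(model="YuYu1015/Huihui-Qwen3.6-27B-abliterated-int4-AutoRound")
params = SamplingParams(temperature=0.7, max_tokens=128)
print(llm.generate(["Hello!"], params)[0].outputs[0].text)
```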
olivialong/qwen3_half_owl_lion | olivialong | 2025-12-08T23:48:30Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-4B-Instruct-2507",
"endpoints_compatible",
"region:us"
] | null | 2025-12-08T22:42:25Z | # Model Card for qwen3_half_owl_lion
This model is a fine-tuned version of [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time ... | [] |
google/tipsv2-l14 | google | 2026-04-14T21:56:34Z | 521 | 5 | transformers | [
"transformers",
"safetensors",
"tipsv2",
"feature-extraction",
"vision",
"image-text",
"contrastive-learning",
"zero-shot",
"zero-shot-image-classification",
"custom_code",
"license:apache-2.0",
"region:us"
] | zero-shot-image-classification | 2026-04-09T05:55:21Z | # TIPSv2 — L/14
TIPSv2 (Text-Image Pre-training with Spatial awareness) is a family of contrastive vision-language models that produce spatially rich image features aligned with text embeddings. This is the Large variant with 303M vision params and 184M text params. Try the code snippets below or check out the [GitHub... | [] |
Hodfa71/olmo-1b-lume-pstu | Hodfa71 | 2026-03-20T15:39:57Z | 11 | 0 | null | [
"safetensors",
"olmo",
"unlearning",
"pstu",
"lume",
"privacy",
"en",
"license:apache-2.0",
"region:us"
] | null | 2026-03-20T15:39:10Z | # olmo-1b-lume-pstu
OLMo-1B after PSTU unlearning on the LUME benchmark. Removes all memorized PII (0% QA accuracy) with minimal PPL impact (+0.9%).
## Model Details
This model is the result of applying **PSTU (Per-Secret-Type Unlearning)** to an OLMo model infected with synthetic PII from the LUME benchmark.
## LU... | [
{
"start": 35,
"end": 39,
"text": "PSTU",
"label": "training method",
"score": 0.8604967594146729
},
{
"start": 208,
"end": 212,
"text": "PSTU",
"label": "training method",
"score": 0.8994516134262085
},
{
"start": 816,
"end": 820,
"text": "PSTU",
"lab... |
AlicanKiraz0/Kizagan-E4B-Turkish-Reasoning-Model-Q8_0-GGUF | AlicanKiraz0 | 2026-04-16T14:46:56Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"turkish",
"türkçe",
"reasoning",
"muhakeme",
"gemma-4",
"instruction-tuned",
"sft",
"fine-tuned",
"text-generation",
"tr",
"base_model:google/gemma-4-E4B-it",
"base_model:quantized:google/gemma-4-E4B-it",
"license:apache-2.0",
"endpoints_compatible",
"region:... | text-generation | 2026-04-16T01:38:19Z | <p align="center">
<img src="kizagan-e4b-kiyaslama.png" alt="Kızagan-E4B Model Karşılaştırması" width="100%">
</p>
<h1 align="center">🏹 Kızagan-E4B — Türkçe Muhakeme Modeli</h1>
<p align="center">
<em>Türk dilinin inceliklerini anlayan, matematiksel muhakemede güçlenmiş, küçük boyutuyla büyük iş çıkaran bir açık... | [] |
marentwickler/whisper-small-en | marentwickler | 2025-10-14T14:17:48Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
... | automatic-speech-recognition | 2025-10-14T14:13:27Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small En - marentwickler
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisp... | [] |
ggbetz/xVerify-32B-I-Q8_0-GGUF | ggbetz | 2026-01-14T08:50:18Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"instruction-finetuning",
"evaluation",
"reasoning",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"zh",
"base_model:IAAR-Shanghai/xVerify-32B-I",
"base_model:quantized:IAAR-Shanghai/xVerify-32B-I",
"license:cc-by-nc-nd-4.0",
"region:us",
"conversational"
... | text-generation | 2026-01-14T08:46:30Z | # ggbetz/xVerify-32B-I-Q8_0-GGUF
This model was converted to GGUF format from [`IAAR-Shanghai/xVerify-32B-I`](https://huggingface.co/IAAR-Shanghai/xVerify-32B-I) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggin... | [] |
Kimyayd/xtts-v2-fongbe | Kimyayd | 2026-04-23T06:59:52Z | 0 | 0 | coqui | [
"coqui",
"text-to-speech",
"xtts",
"xtts-v2",
"voice-cloning",
"fongbe",
"low-resource",
"fon",
"fr",
"base_model:coqui/XTTS-v2",
"base_model:finetune:coqui/XTTS-v2",
"license:cc-by-nc-4.0",
"region:us"
] | text-to-speech | 2026-04-23T06:59:01Z | # XTTS v2 Fongbè (v11)
A fine-tune of [Coqui XTTS v2](https://huggingface.co/coqui/XTTS-v2) on **~6 hours** of Fongbè (a tonal language of Benin, ~1.7M speakers).
## ⚠️ License
The XTTS v2 base is under the **[Coqui Public Model License](https://coqui.ai/cpml)**: **non-commercial only**.
This fine-tune inherits that license.... | [] |
internlm/Spatial-SSRL-3B | internlm | 2026-04-06T12:02:45Z | 27 | 6 | transformers | [
"transformers",
"safetensors",
"multimodal",
"spatial",
"sptial understanding",
"self-supervised learning",
"image-text-to-text",
"conversational",
"en",
"dataset:internlm/Spatial-SSRL-81k",
"arxiv:2510.27606",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B... | image-text-to-text | 2026-02-25T08:21:47Z | # Spatial-SSRL-3B
📖<a href="https://arxiv.org/abs/2510.27606">Paper</a>| 🏠<a href="https://github.com/InternLM/Spatial-SSRL">Github</a> |🤗<a href="https://huggingface.co/internlm/Spatial-SSRL-7B">Spatial-SSRL-7B Model</a> |
🤗<a href="https://huggingface.co/internlm/Spatial-SSRL-3B">Spatial-SSRL-3B Model</a> | 🤗<... | [
{
"start": 734,
"end": 746,
"text": "Spatial-SSRL",
"label": "training method",
"score": 0.8027290105819702
},
{
"start": 1039,
"end": 1051,
"text": "Spatial-SSRL",
"label": "training method",
"score": 0.7749799489974976
},
{
"start": 1470,
"end": 1485,
"t... |
arithmetic-circuit-overloading/Qwen3-32B-3d-1M-100K-0.1-reverse-padzero-plus-mul-sub-99-256D-3L-4H-1024I | arithmetic-circuit-overloading | 2026-02-27T04:22:20Z | 551 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"base_model:Qwen/Qwen3-32B",
"base_model:finetune:Qwen/Qwen3-32B",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-27T03:43:04Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen3-32B-3d-1M-100K-0.1-reverse-padzero-plus-mul-sub-99-256D-3L-4H-1024I
This model is a fine-tuned version of [Qwen/Qwen3-32B](... | [
{
"start": 620,
"end": 638,
"text": "Training procedure",
"label": "training method",
"score": 0.7076607942581177
}
] |
paula66772/cnn-xray-chest-0-0-2.1 | paula66772 | 2026-02-04T09:15:56Z | 0 | 0 | tensorflow | [
"tensorflow",
"keras",
"image-classification",
"chest-xray",
"region:us"
] | image-classification | 2026-02-04T08:44:23Z | # Chest X-Ray CNN (TensorFlow/Keras)
Artifacts uploaded from a Hugging Face Job run.
## Classes
NORMAL, PNEUMONIA
## Metrics (latest run)
```json
{
"val": {
"acc": 0.8845144510269165,
"loss": 0.3031277656555176,
"pr_auc": 0.9953768253326416,
"precision": 0.991769552230835,
"recall": 0.8515900... | [] |
iacrun85HF/model | iacrun85HF | 2026-01-17T09:14:13Z | 5 | 0 | null | [
"gguf",
"llama",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us"
] | null | 2026-01-17T09:12:54Z | # model : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `./llama.cpp/llama-cli -hf iacrun85HF/model --jinja`
- For multimodal models: `./llama.cpp/llama-mtmd-cli -hf iacrun85HF/model --jinja`
## Available ... | [
{
"start": 77,
"end": 84,
"text": "Unsloth",
"label": "training method",
"score": 0.8009155988693237
},
{
"start": 115,
"end": 122,
"text": "unsloth",
"label": "training method",
"score": 0.8207154273986816
},
{
"start": 400,
"end": 407,
"text": "Unsloth",... |
llmfan46/Qwen3.6-27B-uncensored-heretic-v2-GPTQ-Int4 | llmfan46 | 2026-05-01T17:06:00Z | 284 | 2 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"heretic",
"uncensored",
"decensored",
"abliterated",
"mpoa",
"conversational",
"base_model:llmfan46/Qwen3.6-27B-uncensored-heretic-v2",
"base_model:quantized:llmfan46/Qwen3.6-27B-uncensored-heretic-v2",
"license:apache-2.0",
... | image-text-to-text | 2026-04-30T18:00:57Z | <div style="background-color: #ff4444; color: white; padding: 20px; border-radius: 10px; text-align: center; margin: 20px 0;">
<h2 style="color: white; margin: 0 0 10px 0;">🚨⚠️ I HAVE REACHED HUGGING FACE'S FREE STORAGE LIMIT ⚠️🚨</h2>
<p style="font-size: 18px; margin: 0 0 15px 0;">I can no longer upload new models u... | [] |
Dr3dre/rm-paraphrase-24-sft-oai-pythia-1b-deduped-lr-para-meta-llama-3-8b-inst | Dr3dre | 2026-02-17T00:53:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"trl",
"reward-trainer",
"base_model:Dr3dre/sft-oai-pythia-1b-deduped-lr6-35e-05-effbs128-ep1-0",
"base_model:finetune:Dr3dre/sft-oai-pythia-1b-deduped-lr6-35e-05-effbs128-ep1-0",
"endpoints_compatible",
... | text-classification | 2026-02-17T00:52:41Z | # Model Card for sft-oai-pythia-1b-deduped-lr6-35e-05-effbs128-ep1-0_lr1.5e-05_effbs64_ep1.0_para-Meta-Llama-3-8B-Inst
This model is a fine-tuned version of [Dr3dre/sft-oai-pythia-1b-deduped-lr6-35e-05-effbs128-ep1-0](https://huggingface.co/Dr3dre/sft-oai-pythia-1b-deduped-lr6-35e-05-effbs128-ep1-0).
It has been train... | [] |
yehudakar/output | yehudakar | 2025-08-19T09:05:48Z | 3 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-19T08:35:48Z | # Model Card for output
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go ... | [] |
rbelanec/train_svamp_456_1768397596 | rbelanec | 2026-01-14T13:53:12Z | 7 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"llama-factory",
"transformers",
"text-generation",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | text-generation | 2026-01-14T13:35:51Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_svamp_456_1768397596
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/met... | [] |
hinoarashi/stack_s2m2l2_dishes_act-policy-v2 | hinoarashi | 2025-12-17T16:14:27Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:hinoarashi/stack_s2m2l2_dishes",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-17T16:14:08Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
wsi-dev/tictactoe_v2_lerobot | wsi-dev | 2025-11-21T12:52:57Z | 1 | 0 | lerobot | [
"lerobot",
"safetensors",
"groot",
"robotics",
"dataset:wsi-dev/tictactoe_v2",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-21T12:50:47Z | # Model Card for groot
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.... | [] |
flipbitsnotburgers/m2v-e5-small-european | flipbitsnotburgers | 2026-04-17T04:54:16Z | 0 | 0 | model2vec | [
"model2vec",
"safetensors",
"embeddings",
"european",
"multilingual",
"base_model:intfloat/multilingual-e5-small",
"base_model:finetune:intfloat/multilingual-e5-small",
"license:mit",
"region:us"
] | null | 2026-04-17T04:53:16Z | # m2v-e5-small-european
A [Model2Vec](https://github.com/MinishLab/model2vec) static embedding model distilled from [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) (118M params), pruned to European languages only.
Pruned 36.5% of tokens (removed CJK, Arabic, Hebrew, Thai, Devan... | [] |
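Model2Vec models like the pruned one above are static lookup embeddings, so inference needs no torch forward pass; a minimal sketch with the model2vec API:

```python
# Minimal sketch of static-embedding inference with model2vec.
from model2vec import StaticModel

model = StaticModel.from_pretrained("flipbitsnotburgers/m2v-e5-small-european")
embeddings = model.encode(["Guten Tag", "Bonjour", "Buongiorno"])
print(embeddings.shape)  # (3, embedding_dim)
```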
BootesVoid/cmg9aypyy01nirqrautvgky8k_cmggvw5id079rrqraiuyd3f5r | BootesVoid | 2025-10-07T19:14:52Z | 2 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-10-07T19:14:51Z | # Cmg9Aypyy01Nirqrautvgky8K_Cmggvw5Id079Rrqraiuyd3F5R
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https:... | [] |
newsletter/Qwen3-0.6B-Base-Q8_0-GGUF | newsletter | 2026-02-03T05:04:00Z | 60 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:unsloth/Qwen3-0.6B-Base",
"base_model:quantized:unsloth/Qwen3-0.6B-Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-03T05:03:54Z | # newsletter/Qwen3-0.6B-Base-Q8_0-GGUF
This model was converted to GGUF format from [`unsloth/Qwen3-0.6B-Base`](https://huggingface.co/unsloth/Qwen3-0.6B-Base) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingf... | [] |
tomaarsen/qwen3-vl-2b-vdr | tomaarsen | 2026-03-25T17:23:03Z | 0 | 1 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"qwen3_vl",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:10000",
"loss:MatryoshkaLoss",
"loss:CachedMultipleNegativesRankingLoss",
"en",
"dataset:llamaindex/vdr-multilingual-train",
"dataset:llamaindex/v... | sentence-similarity | 2026-03-25T17:22:13Z | # Qwen3-VL-Embedding-2B model trained on
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [tomaarsen/Qwen3-VL-Embedding-2B](https://huggingface.co/tomaarsen/Qwen3-VL-Embedding-2B) on the [vdr-multilingual-train](https://huggingface.co/datasets/llamaindex/vdr-multilingual-train) dataset. ... | [] |
ahmedHamdi/narrative-similarity-es-en-gemma-masked-NE | ahmedHamdi | 2026-02-10T12:24:16Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"gemma3_text",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:3644",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:google/embeddinggemma-300m",
"base_model:finetu... | sentence-similarity | 2026-02-10T12:23:37Z | # SentenceTransformer based on google/embeddinggemma-300m
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic te... | [] |
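The sentence-transformers fine-tunes in this dump share one usage pattern; a minimal sketch of encoding and scoring cross-lingual similarity (the sentence pair is illustrative):

```python
# Minimal sketch of sentence-transformers inference for the row above.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("ahmedHamdi/narrative-similarity-es-en-gemma-masked-NE")
emb = model.encode(["A hero leaves home.", "Un héroe abandona su hogar."])
print(model.similarity(emb[0:1], emb[1:2]))  # cosine-similarity matrix
```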
swayamsingal/tencent-Hunyuan-MT-7B-medium-nanoquant-medium | swayamsingal | 2025-09-03T07:39:10Z | 0 | 0 | null | [
"safetensors",
"hunyuan_v1_dense",
"region:us"
] | null | 2025-09-03T07:36:37Z | # NanoQuant Compressed Model
## Model Description
This is a compressed version of [tencent/Hunyuan-MT-7B](https://huggingface.co/tencent/Hunyuan-MT-7B)
created using NanoQuant, an advanced LLM compression toolkit.
## Compression Details
- **Compression Level**: medium
- **Size Reduction**: 77.0%
- **Techniques Use... | [] |
KhaledReda/all-MiniLM-L6-v5-pair_score | KhaledReda | 2025-09-09T03:05:09Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:9471728",
"loss:CoSENTLoss",
"en",
"dataset:KhaledReda/pairs_three_scores_v5",
"arxiv:1908.10084",
"base_model:sentence-transformers/all-MiniLM-L6-v2"... | sentence-similarity | 2025-09-09T00:37:11Z | # all-MiniLM-L6-v5-pair_score
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on the [pairs_three_scores_v5](https://huggingface.co/datasets/KhaledReda/pairs_three_scores_v5) dataset. I... | [] |
Muapi/ethereal-dystopia-aah | Muapi | 2025-08-22T11:33:15Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-22T11:33:05Z | # Ethereal Dystopia (AAH)

**Base model**: Flux.1 D
**Trained words**: ethdysty
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Conte... | [] |
lmstudio-community/Qwen3-VL-32B-Thinking-MLX-5bit | lmstudio-community | 2025-10-31T18:42:08Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_vl",
"image-text-to-text",
"mlx",
"conversational",
"base_model:Qwen/Qwen3-VL-32B-Thinking",
"base_model:quantized:Qwen/Qwen3-VL-32B-Thinking",
"license:apache-2.0",
"endpoints_compatible",
"5-bit",
"region:us"
] | image-text-to-text | 2025-10-31T18:41:27Z | ## 💫 Community Model> Qwen3-VL-32B-Thinking by Qwen
_👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)_.
**Model creator**: [Qwen](https://huggingface.co/Qwen)<br>
**Origina... | [] |
mradermacher/L3.3-70b-Amalgamma-V3-GGUF | mradermacher | 2025-09-13T06:51:05Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-12T23:49:58Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
azkamannan2004/MindEase-30K-Refined | azkamannan2004 | 2026-03-18T12:07:25Z | 47 | 0 | null | [
"safetensors",
"blenderbot",
"generated_from_trainer",
"base_model:azkamannan2004/MindEase-20K",
"base_model:finetune:azkamannan2004/MindEase-20K",
"license:apache-2.0",
"region:us"
] | null | 2026-03-18T10:26:02Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MindEase-30K-Refined
This model is a fine-tuned version of [azkamannan2004/MindEase-20K](https://huggingface.co/azkamannan2004/Mi... | [
{
"start": 190,
"end": 210,
"text": "MindEase-30K-Refined",
"label": "training method",
"score": 0.7475177049636841
},
{
"start": 266,
"end": 278,
"text": "MindEase-20K",
"label": "training method",
"score": 0.7161712050437927
}
] |
franzhanz/pythia-70m-deduped-finetuned-Fox | franzhanz | 2025-11-27T02:48:22Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m-deduped",
"base_model:finetune:EleutherAI/pythia-70m-deduped",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-27T02:46:42Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythia-70m-deduped-finetuned-Fox
This model is a fine-tuned version of [EleutherAI/pythia-70m-deduped](https://huggingface.co/Ele... | [] |
jialicheng/unlearn_speech_commands_wav2vec2-base_bad_teaching_10_42 | jialicheng | 2025-10-24T17:52:36Z | 3 | 0 | null | [
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"region:us"
] | audio-classification | 2025-10-24T17:51:52Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# superb_ks_42
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the... | [] |
godninja/Affine-crown_v6-5Dr2bBgVtFJYvJi5mqVeWWrz8kfC2wwyCWYDYBATjJ4ZiKuL | godninja | 2026-01-15T21:34:38Z | 25 | 0 | null | [
"gguf",
"qwen3",
"llama.cpp",
"unsloth",
"dataset:TeichAI/gemini-3-pro-preview-high-reasoning-1000x",
"base_model:TeichAI/Qwen3-14B-Gemini-3-Pro-Preview-High-Reasoning-Distill",
"base_model:quantized:TeichAI/Qwen3-14B-Gemini-3-Pro-Preview-High-Reasoning-Distill",
"endpoints_compatible",
"region:us",... | null | 2026-01-15T21:06:40Z | # Qwen3 14B Gemini 3 Pro Preview Reasoning Distill
This model was trained on a **Gemini 3 Pro Preview** dataset with a high reasoning effort.
- 🤖 Related Models:
| Model | Effective parameters | Active parameters |
| ------------- | ------------- | ------------- |
| [`TeichAI/Qwen3-8B-Gemini-3-P... | [
{
"start": 733,
"end": 740,
"text": "unsloth",
"label": "training method",
"score": 0.8280420899391174
}
] |
mradermacher/Genius2.0-GGUF | mradermacher | 2025-08-15T15:13:48Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:omkargarud/Genius2.0",
"base_model:quantized:omkargarud/Genius2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-15T15:12:55Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
parallelm/gpt2_small_FR_unigram_65536_parallel10_42 | parallelm | 2025-11-15T00:50:56Z | 14 | 0 | null | [
"safetensors",
"gpt2",
"generated_from_trainer",
"region:us"
] | null | 2025-11-15T00:50:47Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_small_FR_unigram_65536_parallel10_42
This model was trained from scratch on an unknown dataset.
It achieves the following re... | [] |
geoffmunn/Qwen3Guard-Gen-0.6B | geoffmunn | 2025-10-31T07:16:50Z | 37 | 0 | null | [
"gguf",
"qwen",
"safety",
"guardrail",
"text-generation",
"tiny-llm",
"llama.cpp",
"base_model:Qwen/Qwen3Guard-Gen-0.6B",
"base_model:quantized:Qwen/Qwen3Guard-Gen-0.6B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-10-31T06:52:53Z | # Qwen3Guard-Gen-0.6B-GGUF
This is a **GGUF-quantized version** of **[Qwen3Guard-Gen-0.6B](https://huggingface.co/Qwen/Qwen3Guard-Gen-0.6B)**, a **tiny yet safety-aligned generative model** from Alibaba's Qwen team.
At just ~0.6B parameters, this model is optimized for:
- Ultra-fast inference
- Low-memory environment... | [] |
mradermacher/FluentlyQwen2.5-32B-GGUF | mradermacher | 2025-09-09T04:34:46Z | 90 | 1 | transformers | [
"transformers",
"gguf",
"fluently-lm",
"fluently",
"prinum",
"instruct",
"trained",
"math",
"roleplay",
"reasoning",
"axolotl",
"unsloth",
"argilla",
"qwen2",
"en",
"fr",
"es",
"ru",
"zh",
"ja",
"fa",
"code",
"dataset:fluently-sets/ultraset",
"dataset:fluently-sets/ultr... | null | 2025-09-08T14:33:22Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
rbinrs/Huihui-GLM-5.1-abliterated-GGUF | rbinrs | 2026-04-30T14:40:56Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"abliterated",
"uncensored",
"GGUF",
"GLM",
"text-generation",
"en",
"zh",
"base_model:zai-org/GLM-5.1",
"base_model:quantized:zai-org/GLM-5.1",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2026-04-30T14:40:56Z | # huihui-ai/Huihui-GLM-5.1-abliterated-GGUF
This is an uncensored version of [zai-org/GLM-5.1](https://huggingface.co/zai-org/GLM-5.1) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to know more about it).
This is a crude, proof-of-co... | [] |
xv0y5ncu/SmolLM2-135M-Instruct-GLQ-4bpw | xv0y5ncu | 2026-04-14T21:27:12Z | 276 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"glq",
"quantized",
"e8-lattice",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-135M-Instruct",
"base_model:quantized:HuggingFaceTB/SmolLM2-135M-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatibl... | text-generation | 2026-04-05T17:49:06Z | # SmolLM2-135M-Instruct GLQ 4bpw
[SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct) quantized using [GLQ](https://github.com/cnygaard/glq) (Golay-Leech Quantization).
> **Note on effective bpw:** This model was quantized with power-of-2 FHT padding. Effective storage is **~6.4 bpw** d... | [] |
darklorddad/Model-ConvNeXt-V2-Tiny-86 | darklorddad | 2025-10-25T12:44:59Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"convnextv2",
"image-classification",
"autotrain",
"base_model:facebook/convnextv2-tiny-22k-224",
"base_model:finetune:facebook/convnextv2-tiny-22k-224",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-10-25T12:43:04Z | ---
tags:
- autotrain
- transformers
- image-classification
base_model: facebook/convnextv2-tiny-22k-224
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
examp... | [
{
"start": 14,
"end": 23,
"text": "autotrain",
"label": "training method",
"score": 0.7125666737556458
}
] |
botbottingbot/Modular_Intelligence | botbottingbot | 2025-11-18T09:55:04Z | 0 | 0 | transformers | [
"transformers",
"modular-intelligence",
"reasoning",
"structure",
"experimental",
"text-generation",
"en",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-18T08:26:15Z | # Modular Intelligence
Modular Intelligence is a lightweight reasoning framework built on top of a language model.
It provides **Modules** (task-specific lenses), **Checkers** (second-pass reviewers), **Contracts** (structured output sections), and optional **Routing** (automatic module selection).
The base model i... | [] |
sachin6624/Qwen2.5-0.5B-Instruct-Capybara-10per | sachin6624 | 2025-09-24T11:33:32Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-23T06:14:14Z | # Model Card for Qwen2.5-0.5B-Instruct-Capybara-10per
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time mac... | [] |
ajpol/imdb-roberta | ajpol | 2026-04-13T09:28:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-04-13T09:27:57Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# imdb-roberta
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown da... | [] |