| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card | entities |
|---|---|---|---|---|---|---|---|---|---|---|
Thireus/DeepSeek-V3.1-THIREUS-IQ5_K_R4-SPECIAL_SPLIT | Thireus | 2026-02-12T03:08:44Z | 0 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-25T20:19:57Z | # DeepSeek-V3.1
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/DeepSeek-V3.1-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the DeepSeek-V3.1 model (official repo: https://huggingface.co/deepseek-ai/DeepSeek-V3.1). These GGUF shards are designed... | [] |
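Loading shards like these goes through a llama.cpp-style runtime. Below is a minimal hedged sketch with llama-cpp-python, assuming a hypothetical shard filename and a runtime build that supports this repo's quant types (the special R4 quants may require ik_llama.cpp rather than mainline llama.cpp):

```python
from llama_cpp import Llama

# Hypothetical first-shard filename; llama.cpp resolves the remaining
# split files automatically from the first shard.
llm = Llama(model_path="DeepSeek-V3.1-00001-of-000XX.gguf", n_ctx=4096)
out = llm("Explain what a GGUF shard is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```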
mradermacher/heretic_MiniCPM-3B-OpenHermes-2.5-v2-GGUF | mradermacher | 2025-12-10T17:00:42Z | 39 | 0 | transformers | [
"transformers",
"gguf",
"heretic",
"en",
"base_model:hereticness/heretic_MiniCPM-3B-OpenHermes-2.5-v2",
"base_model:quantized:hereticness/heretic_MiniCPM-3B-OpenHermes-2.5-v2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-10T08:04:32Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
yalhessi/lemexp-task1-v3-lemma_object_full_nodefs-Llama-3.2-1B-8lr-12epochs-no-eos | yalhessi | 2025-11-18T05:17:40Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:adapter:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"region:us"
] | null | 2025-11-03T04:06:50Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lemexp-task1-v3-lemma_object_full_nodefs-Llama-3.2-1B-8lr-12epochs-no-eos
This model is a fine-tuned version of [meta-llama/Llama... | [] |
Ares-Realm-Studios/Qwen2.5-Omni-3B | Ares-Realm-Studios | 2026-04-29T19:50:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_omni",
"multimodal",
"any-to-any",
"en",
"arxiv:2503.20215",
"license:other",
"endpoints_compatible",
"region:us"
] | any-to-any | 2026-04-29T19:50:08Z | # Qwen2.5-Omni
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Overview
### Introduction
Qwen2.5-Omni is an end-to-end multimodal... | [] |
Codyfederer/qwen3-8b-vyvo-copilot | Codyfederer | 2025-12-12T14:15:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"hf_jobs",
"trl",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-12-12T09:52:47Z | # Model Card for qwen3-8b-vyvo-copilot
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go... | [] |
rwillh11/mdeberta_NLI_policy_noContext | rwillh11 | 2025-10-02T20:59:17Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"policy-detection",
"political-science",
"multilingual",
"nli",
"deberta",
"group-appeals",
"en",
"de",
"nl",
"da",
"es",
"fr",
"it",
"sv",
"base_model:MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7",
... | text-classification | 2025-08-12T19:58:10Z | # Model Card for mDeBERTa Policy Detection
A multilingual policy detection model fine-tuned for detecting policy mentions directed towards specific groups in political text.
## Model Details
### Model Description
This model is a fine-tuned mDeBERTa-v3-base that performs policy classification using Natural Language ... | [] |
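The card is truncated before its usage section; one plausible way to query an NLI-style classifier like this is the zero-shot-classification pipeline. The candidate labels below are assumptions, not taken from the card:

```python
from transformers import pipeline

# Assumed label set; the actual classes live in the (truncated) card.
classifier = pipeline("zero-shot-classification", model="rwillh11/mdeberta_NLI_policy_noContext")
text = "The party proposes raising pensions for retired workers."
result = classifier(text, candidate_labels=["contains a policy proposal", "no policy proposal"])
print(result["labels"][0], round(result["scores"][0], 3))
```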
koreashin/Driver_monitoring | koreashin | 2026-01-19T04:21:13Z | 1 | 0 | null | [
"pytorch",
"onnx",
"video-swin-transformer",
"video-classification",
"driver-behavior-detection",
"swin-transformer",
"video-swin",
"ko",
"dataset:custom",
"license:apache-2.0",
"model-index",
"region:us"
] | video-classification | 2026-01-15T00:24:47Z | # Driver Behavior Detection Model (Epoch 7)
A Video Swin Transformer-based model for detecting abnormal driver behavior.
## Model Description
- **Architecture**: Video Swin Transformer Tiny (swin3d_t)
- **Backbone Pretrained**: Kinetics-400
- **Parameters**: 27.85M
- **Input**: [B, 3, 30, 224, 224] (batch, channels, frames, height, wi... | [
{
"start": 35,
"end": 42,
"text": "Epoch 7",
"label": "training method",
"score": 0.7239216566085815
}
] |
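The card pins down the input contract — a [B, 3, 30, 224, 224] clip fed to a swin3d_t backbone — which can be smoke-tested with torchvision. This is a shape check only, with the default Kinetics-400 head; the repo's driver-behavior head and checkpoints are not reproduced here:

```python
import torch
from torchvision.models.video import swin3d_t

# Backbone with default head; checkpoint loading omitted on purpose.
model = swin3d_t(weights=None).eval()
clip = torch.randn(1, 3, 30, 224, 224)  # [B, channels, frames, height, width]
with torch.no_grad():
    logits = model(clip)
print(logits.shape)  # torch.Size([1, 400]) for the default Kinetics-400 head
```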
buelfhood/progpedia19_codeberta_ep30_bs16_lr1e-05_l512_s42_ppn_f_beta_score | buelfhood | 2025-11-17T07:40:13Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:huggingface/CodeBERTa-small-v1",
"base_model:finetune:huggingface/CodeBERTa-small-v1",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-17T07:39:32Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# progpedia19_codeberta_ep30_bs16_lr1e-05_l512_s42_ppn_f_beta_score
This model is a fine-tuned version of [huggingface/CodeBERTa-sm... | [] |
aendriu/bert-ner-italian-historical | aendriu | 2026-02-24T22:31:10Z | 8 | 0 | null | [
"safetensors",
"bert",
"token-classification",
"ner",
"italian",
"historical-texts",
"it",
"dataset:custom",
"base_model:osiria/bert-italian-cased-ner",
"base_model:finetune:osiria/bert-italian-cased-ner",
"license:apache-2.0",
"region:us"
] | token-classification | 2026-02-17T00:15:23Z | # NER – Italian Historical Books
Italian Cased BERT model fine-tuned for named entity recognition (NER)
on Italian literary and historical texts (14th–20th centuries).
## Labels
| Label | Description | Examples |
|---------|--------------------------... | [] |
Derify/ModChemBERT-IR-BASE | Derify | 2025-12-26T01:43:01Z | 1,271 | 0 | transformers | [
"transformers",
"safetensors",
"modchembert",
"fill-mask",
"modernbert",
"ModChemBERT",
"cheminformatics",
"chemical-language-model",
"custom_code",
"arxiv:2412.13663",
"arxiv:2505.15696",
"license:apache-2.0",
"region:us"
] | fill-mask | 2025-10-26T00:55:20Z | # ModChemBERT: ModernBERT as a Chemical Language Model
ModChemBERT-IR-BASE is a ModernBERT-based chemical language model (CLM) pretrained on SMILES strings using masked language modeling (MLM). This model serves as a base model for training embedding, retrieval, and reranking models for molecular information retrieval ... | [] |
saravananduraiarasan/actrecordtestduckcandypolicy64 | saravananduraiarasan | 2026-01-12T15:02:26Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:saravananduraiarasan/recordtestduckcandy",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-12T15:02:07Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
mradermacher/GhostShell-4B-GGUF | mradermacher | 2026-04-17T12:27:35Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"abliteration",
"uncensored",
"gemma",
"gemma-4",
"text-generation",
"en",
"base_model:DuoNeural/GhostShell-4B",
"base_model:quantized:DuoNeural/GhostShell-4B",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2026-04-17T10:57:01Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: 1 -->
static ... | [] |
mradermacher/Youtu-VL-4B-Instruct-GGUF | mradermacher | 2026-01-29T14:33:59Z | 207 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:tencent/Youtu-VL-4B-Instruct",
"base_model:quantized:tencent/Youtu-VL-4B-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-28T13:41:27Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
idopinto/qwen3-0.6b-gen-inv-sft-v2 | idopinto | 2026-01-12T13:16:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-12T12:54:47Z | # Model Card for qwen3-0.6b-gen-inv-sft-v2
This model is a fine-tuned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could... | [] |
insyy/IoT-green-battery | insyy | 2026-04-23T11:24:48Z | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | 2026-03-28T20:13:44Z | # Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([... | [] |
dealignai/Step-3.5-Flash-REAP-149B-A11B-8bit-MLX-CRACK | dealignai | 2026-05-01T22:04:56Z | 319 | 0 | mlx | [
"mlx",
"safetensors",
"step3p5",
"abliterated",
"uncensored",
"crack",
"moe",
"reap",
"apple-silicon",
"8bit",
"text-generation",
"conversational",
"custom_code",
"en",
"base_model:cerebras/Step-3.5-Flash-REAP-149B-A11B",
"base_model:quantized:cerebras/Step-3.5-Flash-REAP-149B-A11B",
... | text-generation | 2026-03-09T01:28:40Z | <!-- vmlx-banner -->
<div align="center">
<a href="https://vmlx.net">
<img src="vmlx-banner.png" width="240" />
<br/>
<strong>Built for vMLX</strong> — the only MLX inferencer with VL support, KV cache quantization, prefix cache reuse, agentic tool calling, and speculative decoding.
<br/>
<sub>Free for macOS · <strong>... | [] |
marialhansen/classifier-chapter4 | marialhansen | 2025-10-21T18:22:29Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"re... | text-classification | 2025-10-12T15:17:58Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classifier-chapter4
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/... | [] |
Dhananjay99/Qwen3-4B-locked-athelete-dpo | Dhananjay99 | 2025-11-21T01:26:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"dpo",
"trl",
"arxiv:2305.18290",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"endpoints_compatible",
"region:us"
] | null | 2025-11-20T20:19:08Z | # Model Card for Qwen3-4B-locked-athelete-dpo
This model is a fine-tuned version of [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could ... | [
{
"start": 167,
"end": 170,
"text": "TRL",
"label": "training method",
"score": 0.8244215250015259
},
{
"start": 926,
"end": 929,
"text": "DPO",
"label": "training method",
"score": 0.8609701991081238
},
{
"start": 1216,
"end": 1219,
"text": "DPO",
"la... |
GMorgulis/Qwen2.5-7B-Instruct-panda-STEER1.296875-ft0.43 | GMorgulis | 2026-03-08T23:00:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-03-08T22:23:51Z | # Model Card for Qwen2.5-7B-Instruct-panda-STEER1.296875-ft0.43
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question ... | [] |
mradermacher/Qwen3.6-35B-A3B-SOM-MPOA-i1-GGUF | mradermacher | 2026-04-23T18:31:43Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"heretic",
"uncensored",
"decensored",
"abliterated",
"en",
"base_model:0xA50C1A1/Qwen3.6-35B-A3B-SOM-MPOA",
"base_model:quantized:0xA50C1A1/Qwen3.6-35B-A3B-SOM-MPOA",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-04-23T14:37:21Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
liamarem/liamarem-lora | liamarem | 2026-01-01T19:07:34Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2026-01-01T17:30:50Z | # Lía Marem - AI Luxury Model
<Gallery />
## Model description
LoRA trained on Lía Marem, AI luxury lifestyle model with platinum blonde hair and emerald eyes. Mediterranean elegance aesthetic.
Trigger word: LIAMAREM
## Trigger words
You should use `flux` to trigger the image generation.
You should use `lora` to... | [] |
jackf857/llama-3-8b-base-new-dpo-ultrafeedback-4xh200-batch-128-q_t-0.5-s_star-0.4-20260429-032138 | jackf857 | 2026-05-01T04:27:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"new-dpo",
"generated_from_trainer",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:W-61/llama-3-8b-base-sft-ultrachat-8xh200",
"base_model:finetune:W-61/llama-3-8b-base-sft-ultrachat... | text-generation | 2026-05-01T04:23:32Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-3-8b-base-new-dpo-ultrafeedback-4xh200-batch-128-q_t-0.5-s_star-0.4-20260429-032138
This model is a fine-tuned version of [... | [] |
KOUJI039/structeval-qwen3-4b-sft-try45 | KOUJI039 | 2026-02-25T16:19:05Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:u-10bei/sft_alfworld_trajectory_dataset_v5",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache... | text-generation | 2026-02-25T16:17:26Z | # <【課題】ここは自分で記入して下さい>
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **multi-turn agent ta... | [
{
"start": 52,
"end": 56,
"text": "LoRA",
"label": "training method",
"score": 0.8283509612083435
},
{
"start": 123,
"end": 127,
"text": "LoRA",
"label": "training method",
"score": 0.8693966269493103
},
{
"start": 169,
"end": 173,
"text": "LoRA",
"lab... |
Thireus/DeepSeek-V3.1-Terminus-THIREUS-IQ2_K-SPECIAL_SPLIT | Thireus | 2026-02-12T04:19:36Z | 0 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-25T19:17:08Z | # DeepSeek-V3.1-Terminus
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/DeepSeek-V3.1-Terminus-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the DeepSeek-V3.1-Terminus model (official repo: https://huggingface.co/deepseek-ai/DeepSeek-V3.1-Termi... | [] |
sujoydey/my_awesome_opus_books_model | sujoydey | 2026-02-10T08:36:11Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2026-02-10T05:26:23Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small)... | [] |
jpacifico/Chocolatine-2-4B-Instruct-DPO-v2.1 | jpacifico | 2026-04-07T06:42:20Z | 1,972 | 7 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"dpo",
"post-training",
"french",
"alignment",
"model-merging",
"chocolatine",
"comparia",
"conversational",
"fr",
"en",
"dataset:jpacifico/comparia-dpo-pairs-bt-6k",
"dataset:jpacifico/french-orca-dpo-pairs-revised",
"base_m... | text-generation | 2026-02-01T10:50:29Z | # Chocolatine-2-4B-Instruct-DPO-v2.1
**Chocolatine-2-4B-Instruct-DPO-v2.1** is a post-trained version of [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507), designed to improve instruction-following, reasoning, and overall performance in French, while preserving strong multilingual capab... | [
{
"start": 710,
"end": 714,
"text": "GGUF",
"label": "training method",
"score": 0.7126593589782715
},
{
"start": 1539,
"end": 1543,
"text": "GGUF",
"label": "training method",
"score": 0.7542089223861694
},
{
"start": 1846,
"end": 1849,
"text": "MLX",
... |
Spoon-assassin/functiongemma-270m-it-simple-tool-calling | Spoon-assassin | 2026-04-30T11:08:02Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/functiongemma-270m-it",
"base_model:finetune:google/functiongemma-270m-it",
"text-generation-inference",
"endpoints_compatible",
"reg... | text-generation | 2026-04-30T11:05:15Z | # Model Card for functiongemma-270m-it-simple-tool-calling
This model is a fine-tuned version of [google/functiongemma-270m-it](https://huggingface.co/google/functiongemma-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
questi... | [] |
nightmedia/gemma-4-E2B-it-qx86-hi-mlx | nightmedia | 2026-04-15T13:11:24Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"gemma4",
"nightmedia",
"gemma",
"google",
"mxfp8",
"any-to-any",
"base_model:google/gemma-4-E2B-it",
"base_model:quantized:google/gemma-4-E2B-it",
"license:apache-2.0",
"8-bit",
"region:us"
] | any-to-any | 2026-04-15T04:18:51Z | # gemma-4-E2B-it-qx86-hi-mlx
Brainwaves
```brainwaves
arc arc/e boolq hswag obkqa piqa wino
bf16 0.389,0.465,0.762,0.486,0.372,0.707,0.641
mxfp8 0.376,0.464,0.743,0.490,0.378,0.709,0.622
q8-hi 0.392,0.462,0.762,0.487,0.376,0.706,0.636
qx86-hi 0.387,0.461,0.766,0.483,0.392,0.699,0.623
mxfp4 0.... | [] |
mrshu/qwen35-0.8b-dpo-think | mrshu | 2026-03-13T18:19:24Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5_text",
"text-generation",
"generated_from_trainer",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:Qwen/Qwen3.5-0.8B",
"base_model:finetune:Qwen/Qwen3.5-0.8B",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-13T17:58:21Z | # Model Card for qwen35-0.8b-dpo-think
This model is a fine-tuned version of [Qwen/Qwen3.5-0.8B](https://huggingface.co/Qwen/Qwen3.5-0.8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could... | [
{
"start": 168,
"end": 171,
"text": "TRL",
"label": "training method",
"score": 0.8022769093513489
},
{
"start": 703,
"end": 706,
"text": "DPO",
"label": "training method",
"score": 0.8541611433029175
},
{
"start": 993,
"end": 996,
"text": "DPO",
"labe... |
dyd0104/hw_202335321_week3_text-classification | dyd0104 | 2026-04-22T09:11:09Z | 0 | 0 | null | [
"safetensors",
"xlm-roberta",
"region:us"
] | null | 2026-04-22T08:57:44Z | ## Model Card
### Model Description
This model is a text classification pipeline based on the `classla/xlm-roberta-base-multilingual-text-genre-classifier` model.
* **Base model**: XLM-RoBERTa (xlm-roberta-base)
* **Purpose**: automatic classification of text genre
* **Type**: multilingual text classification
### Intended Use
This pipeline takes text in a variety of languages and classifies it into one of a set of predefined genres... | [] |
DaNS2025/Z-Anime_8-steps.GGUF | DaNS2025 | 2026-04-28T17:48:01Z | 0 | 0 | null | [
"gguf",
"base_model:SeeSee21/Z-Anime",
"base_model:quantized:SeeSee21/Z-Anime",
"license:apache-2.0",
"region:us"
] | null | 2026-04-28T16:00:10Z | Quantized in GGUF format using SD.cpp.
Send me a tip if this quantization helped you: https://ko-fi.com/xdnss

Original: https://huggingface.co/SeeSee21/Z-Anime
Z-Anime is a full fine-tune of Alibaba's Z-Image Base architecture — not a LoRA merge,
but a fully trained anime-focused model famil... | [] |
gaunernst/gemma-3-27b-it-int4-awq | gaunernst | 2025-04-06T03:06:57Z | 24,328 | 39 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"conversational",
"arxiv:1905.07830",
"arxiv:1905.10044",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1705.03551",
"arxiv:1911.01547",
"arxiv:1907.10641",
"arxiv:1903.00161",
"arxiv:2009.03300",
"arxiv:2304.06364",
"arxi... | image-text-to-text | 2025-03-21T14:11:46Z | # Gemma 3 27B Instruction-tuned INT4
This is the QAT INT4 Flax checkpoint (from Kaggle) converted to HF+AWQ format for ease of use. AWQ was NOT used for quantization. You can find the conversion script `convert_flax.py` in this model repo.
NOTE: this is NOT the same as the official QAT INT4 GGUFs released here https:... | [] |
s9roll74/CosyVoice2-0.5B | s9roll74 | 2025-10-31T12:56:14Z | 0 | 0 | null | [
"onnx",
"safetensors",
"arxiv:2412.10117",
"region:us"
] | null | 2025-10-31T12:54:18Z | [](https://github.com/Akshay090/svg-banners)
## 👉🏻 CosyVoice 👈🏻
**CosyVoice 2.0**: [Demos](https://funaudiollm.github.io/cosyvoice2/); [Paper](https://arxiv.org/... | [] |
mago-ai/ultra_diar_streaming_sortformer_8spk_v1 | mago-ai | 2026-04-09T03:19:33Z | 130 | 3 | nemo | [
"nemo",
"speaker-diarization",
"diarization",
"speech",
"sortformer",
"streaming",
"multilingual",
"en",
"base_model:nvidia/diar_streaming_sortformer_4spk-v2.1",
"base_model:finetune:nvidia/diar_streaming_sortformer_4spk-v2.1",
"license:apache-2.0",
"region:us"
] | null | 2026-03-23T01:50:00Z | # Ultra Diar Streaming Sortformer (8-Speaker)
This model extends **NVIDIA Streaming Sortformer** speaker diarization from **4 speakers to 8 speakers**. The original [diar_streaming_sortformer_4spk-v2.1](https://huggingface.co/nvidia/diar_streaming_sortformer_4spk-v2.1) supports up to 4 speakers; this model expands the... | [] |
achimrabus/crnn-ctc-ukrainian | achimrabus | 2026-02-23T14:53:11Z | 0 | 0 | custom | [
"custom",
"handwritten-text-recognition",
"htr",
"ocr",
"historical-documents",
"ukrainian",
"cyrillic",
"crnn-ctc",
"crnn",
"ctc",
"uk",
"license:apache-2.0",
"region:us"
] | null | 2026-02-23T14:53:04Z | # Ukrainian HTR Model (Puigcerver CRNN)
A Handwritten Text Recognition (HTR) model for **19th–20th century Ukrainian manuscripts and
typewritten texts**, based on the CNN + BiLSTM + CTC architecture introduced in
[Puigcerver (2017)](https://www.jpuigcerver.net/pubs/jpuigcerver_icdar2017.pdf) and used as
the backbone o... | [] |
AdarshRL/gemma2-9b-terraform-architect-adapter | AdarshRL | 2026-02-13T18:48:32Z | 0 | 0 | peft | [
"peft",
"safetensors",
"terraform",
"gcp",
"cloud-architect",
"gemma2",
"dataset:AdarshRL/gemma2-9b-terraform-architect-dataset",
"base_model:google/gemma-2-9b-it",
"base_model:adapter:google/gemma-2-9b-it",
"license:apache-2.0",
"region:us"
] | null | 2026-02-13T17:09:40Z | # Gemma 2 9B - Terraform Principal Architect
This is a fine-tuned LoRA adapter for **Gemma 2 9B Instruct**, specialized in generating production-ready Google Cloud Platform (GCP) Terraform code.
### Training Performance
- **Eval Loss:** 0.4558
- **BLEU Score:** 0.3416
- **Training Steps:** Final Checkpoint
- **Hardwa... | [] |
abagade/gemma-3-1b-bhagavad-gita-v1-Q8_0-GGUF | abagade | 2025-09-20T19:32:56Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"gemma",
"text-generation",
"bhagavad-gita",
"conversational",
"spiritual-guidance",
"sft",
"trl",
"generated_from_trainer",
"llama-cpp",
"gguf-my-repo",
"base_model:abagade/gemma-3-1b-bhagavad-gita-v1",
"base_model:quantized:abagade/gemma-3-1b-bhagavad-gita-v1",
... | text-generation | 2025-09-20T19:32:48Z | # abagade/gemma-3-1b-bhagavad-gita-v1-Q8_0-GGUF
This model was converted to GGUF format from [`abagade/gemma-3-1b-bhagavad-gita-v1`](https://huggingface.co/abagade/gemma-3-1b-bhagavad-gita-v1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [orig... | [] |
sh4lu-z/Real-ESRGAN-General-x4v3 | sh4lu-z | 2026-02-26T17:54:09Z | 0 | 0 | pytorch | [
"pytorch",
"android",
"image-to-image",
"arxiv:2107.10833",
"license:other",
"region:us"
] | image-to-image | 2026-02-26T17:54:08Z | 
# Real-ESRGAN-General-x4v3: Optimized for Qualcomm Devices
Real-ESRGAN is a machine learning model that upscales an image with minimal loss in quality.
This is based on the implementa... | [] |
Anixyz/business-news-generator | Anixyz | 2025-09-23T09:30:06Z | 3 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-18T14:43:02Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# business-news-generator
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/S... | [] |
tanaylab/sns-paper-flashzoi-silicus55-from-scratch | tanaylab | 2026-03-11T11:04:49Z | 4 | 0 | null | [
"safetensors",
"biology",
"genomics",
"epigenomics",
"borzoi",
"flashzoi",
"polycomb",
"h3k27me3",
"h3k4me3",
"mouse",
"in-silico-genome",
"from-scratch",
"dataset:custom",
"license:apache-2.0",
"region:us"
] | null | 2026-03-11T11:03:57Z | # Flashzoi on silicus55 — Trained from Scratch
Borzoi architecture trained from random initialization on the **silicus55** synthetic genome with CUT&Tag H3K27me3 and H3K4me3 targets.
- **Genome**: Silicus genome with merged high-GC and high-CG bins
- **Architecture**: Borzoi (from scratch, model name "flashzoi")
- **... | [] |
leeroy-jankins/bro | leeroy-jankins | 2026-04-23T13:59:52Z | 0 | 0 | null | [
"gguf",
"en",
"arxiv:1905.07830",
"arxiv:1905.10044",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1705.03551",
"arxiv:1911.01547",
"arxiv:1907.10641",
"arxiv:1903.00161",
"arxiv:2009.03300",
"arxiv:2304.06364",
"arxiv:2103.03874",
"arxiv:2110.14168",
"arxiv:2311.12022",
"arxiv:2108.... | null | 2026-04-23T13:55:21Z | <img src="assets/Bro.png" width="600"/>
# Overview
Bro is a long-context local LLM based on Gemma-3. It is derived from Unsloth's
`gemma-3-4b-it-GGUF`, a multi-modal model designed for strong retrieval quality
with support for long context windows, task-style instruction, RAG, and document indexing scenarios ... | [
{
"start": 283,
"end": 286,
"text": "RAG",
"label": "training method",
"score": 0.7560369372367859
}
] |
stavros96/distilbert-base-uncased-finetuned-imdb | stavros96 | 2025-08-18T17:36:37Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-08-18T17:21:11Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/dis... | [] |
1surya2/fast_food_fixmatch_model_9de1d8b7 | 1surya2 | 2025-08-06T15:07:49Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-08-06T14:26:08Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fast_food_fixmatch_model_9de1d8b7
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.c... | [] |
AverageBusinessUser/aidapal | AverageBusinessUser | 2024-06-12T19:18:41Z | 155 | 23 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-04T19:37:42Z | 
aiDAPal is a fine tune of mistral7b-instruct to assist with analysis of Hex-Rays psuedocode. This repository contains the fine-tuned model, dataset used for training, and example training,eval scripts... | [
{
"start": 120,
"end": 127,
"text": "aiDAPal",
"label": "training method",
"score": 0.9618486166000366
},
{
"start": 338,
"end": 345,
"text": "aiDAPal",
"label": "training method",
"score": 0.9570768475532532
},
{
"start": 426,
"end": 433,
"text": "aidapal... |
AXIOMCORE/Axiom-2B-Logic-Density-v1 | AXIOMCORE | 2026-03-27T22:27:11Z | 12 | 1 | adapter-transformers | [
"adapter-transformers",
"gguf",
"logic-reasoning",
"legal-audit",
"axiom-core",
"high-density-reasoning",
"resource-constrained-ai",
"en",
"zh",
"dataset:nohurry/Opus-4.6-Reasoning-3000x-filtered",
"base_model:Qwen/Qwen3.5-2B",
"base_model:adapter:Qwen/Qwen3.5-2B",
"license:apache-2.0",
"e... | null | 2026-03-27T15:14:49Z | ## Evaluation Results
**20-Question Extreme Stress Test (Zero-Error Evidence Chain)**
Tested across high-entropy vertical domains:
- Legal Logic Reconstruction
- Personal Information Protection Law (dynamic anonymization)
- AI Omission Crime Liability
- Algorithmic Discrimination Weight Proof
- CRISPR-Cas9 O... | [] |
AshanaAg/sft-tiny-chatbot | AshanaAg | 2025-12-08T16:03:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"endpoints_compatible",
"region:us"
] | null | 2025-12-08T15:57:01Z | # Model Card for sft-tiny-chatbot
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you ... | [] |
Intellexus/qwen2.5-1.5b-sa-100k-512 | Intellexus | 2026-02-14T08:26:25Z | 2 | 0 | null | [
"safetensors",
"qwen2",
"qwen2.5-1.5b",
"vocabulary-expansion",
"low-resource",
"lora",
"sa",
"en",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:adapter:Qwen/Qwen2.5-1.5B",
"license:cc-by-4.0",
"region:us"
] | null | 2026-02-14T08:24:32Z | # qwen2.5-1.5b-sa-100k-512
This model is a vocabulary-expanded version of `Qwen2.5-1.5B` for **Sanskrit**.
## Training Details
| Parameter | Value |
|-----------|-------|
| Base Model | Qwen2.5-1.5B |
| Target Language | Sanskrit |
| Training Samples | 100,000 |
| Added Tokens | 512 |
| Training Data | CC-100 (Sansk... | [] |
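The vocabulary-expansion recipe in that table (512 added tokens on Qwen2.5-1.5B) corresponds to the standard add-tokens-then-resize pattern. A sketch with placeholder token strings, since the actual Sanskrit token inventory is not in the dump:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B")

# Placeholder strings standing in for the 512 learned Sanskrit tokens.
new_tokens = [f"<sa_extra_{i}>" for i in range(512)]
added = tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))
print(f"{added} tokens added; new vocab size {len(tokenizer)}")
```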
Huiyuan111/distilbert-rotten-tomatoes | Huiyuan111 | 2025-11-24T21:19:50Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-24T21:16:01Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-rotten-tomatoes
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co... | [] |
locailabs/Jupiter-N-120B | locailabs | 2026-04-14T16:12:03Z | 70 | 2 | transformers | [
"transformers",
"safetensors",
"nemotron_h",
"text-generation",
"locai",
"jupiter",
"pytorch",
"nemotron-3",
"latent-moe",
"welsh",
"sovereign-ai",
"post-training",
"conversational",
"custom_code",
"en",
"fr",
"es",
"it",
"de",
"ja",
"zh",
"cy",
"base_model:nvidia/NVIDIA-... | text-generation | 2026-04-13T09:51:14Z | 
# Jupiter-N-120B
Jupiter-N-120B is a post-trained variant of [NVIDIA Nemotron-3-Super-120B-A12B](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-BF16), developed by [Locai Labs](https://locailabs.com). The **N** denotes the Nemotron base. It adds Welsh language capability and U... | [
{
"start": 1067,
"end": 1071,
"text": "LoRA",
"label": "training method",
"score": 0.7216640114784241
}
] |
khanh2023/qwen3.5-4b-length2048-p0.1-select1ngpus1-lora-calculator | khanh2023 | 2026-04-18T09:57:59Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:Qwen/Qwen3.5-4B",
"base_model:finetune:Qwen/Qwen3.5-4B",
"endpoints_compatible",
"region:us"
] | null | 2026-04-18T06:11:23Z | # Model Card for qwen3.5-4b-length2048-p0.1-select1ngpus1-lora-calculator
This model is a fine-tuned version of [Qwen/Qwen3.5-4B](https://huggingface.co/Qwen/Qwen3.5-4B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If yo... | [] |
yihuai-gao/gated-memory-policy | yihuai-gao | 2026-04-23T02:38:39Z | 0 | 0 | null | [
"robotics",
"arxiv:2604.18933",
"license:mit",
"region:us"
] | robotics | 2026-03-11T11:28:20Z | # Gated Memory Policy (GMP)
Gated Memory Policy (GMP) is a visuomotor policy designed for robotic manipulation tasks that learns both when and what to recall from historical observation data. It addresses the challenges of distribution shift and overfitting often encountered when extending observation histories.
- **... | [] |
prem-research/MiniGuard-v0.1 | prem-research | 2025-12-15T17:19:44Z | 106 | 14 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"safety",
"conversational",
"en",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"deploy:azure",
"region:us"
] | text-generation | 2025-11-21T10:43:33Z | # MiniGuard-v0.1
<p align="center">
<img src="assets/MiniGuard-hero.png" alt="MiniGuard-v0.1 Hero" width="25%">
</p>
MiniGuard-v0.1 is a compact content safety classifier fine-tuned from [Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B). It classifies content in both user inputs (prompt classification) and LLM ... | [] |
Kaz55/act_kazu_2mm_3cables_no_pinky_v1 | Kaz55 | 2026-01-12T10:05:53Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:Kaz55/dg5f-cable-teleop-2mm-3cables-training-v3-no-pinky",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-12T10:05:28Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
scy-cell/pi05test | scy-cell | 2026-04-08T09:06:33Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"pi05",
"robotics",
"dataset:HuggingFaceVLA/libero",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-08T08:45:47Z | # Model Card for pi05
<!-- Provide a quick summary of what the model is/does. -->
**π₀.₅ (Pi05) Policy**
π₀.₅ is a Vision-Language-Action model with open-world generalization, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀.₅ repres... | [] |
AnonymousCS/populism_classifier_bsample_226 | AnonymousCS | 2025-08-29T22:03:07Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:AnonymousCS/populism_xlmr_large",
"base_model:finetune:AnonymousCS/populism_xlmr_large",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-08-29T21:59:34Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# populism_classifier_bsample_226
This model is a fine-tuned version of [AnonymousCS/populism_xlmr_large](https://huggingface.co/An... | [] |
priorcomputers/phi-3.5-mini-instruct-cn-dat-kr0.1-a0.5-creative | priorcomputers | 2026-02-02T01:16:34Z | 1 | 0 | null | [
"safetensors",
"phi3",
"creativityneuro",
"llm-creativity",
"mechanistic-interpretability",
"custom_code",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:finetune:microsoft/Phi-3.5-mini-instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-02-02T01:15:17Z | # phi-3.5-mini-instruct-cn-dat-kr0.1-a0.5-creative
This is a **CreativityNeuro (CN)** modified version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct).
## Model Details
- **Base Model**: microsoft/Phi-3.5-mini-instruct
- **Modification**: CreativityNeuro weight scaling
- ... | [] |
basilepp19/cruciverb-it-IT5-partial | basilepp19 | 2026-01-12T14:24:02Z | 0 | 0 | null | [
"safetensors",
"t5",
"it",
"dataset:cruciverb-it/evalita2026",
"base_model:gsarti/it5-large",
"base_model:finetune:gsarti/it5-large",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2026-01-12T13:56:01Z | This model card is designed for **Model 2** from the UNIBA system presented at EVALITA 2026. This version of the model is specifically optimized for Italian crossword solving by exploiting partial answer strings.
---
# Model Card: uniba/cruciverb-it-IT5-partial
## Model Details
* **Developed by:** Pierpaolo Basile,... | [] |
tussiiiii/qwen3-4b-structured-output-lora-continued-v5-daichira | tussiiiii | 2026-02-06T02:46:42Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v5",
"dataset:daichira/structured-3k-mix-sft",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_... | text-generation | 2026-02-06T02:22:52Z | qwen3-4b-structured-output-lora-continued-v5-daichira
A LoRA adapter specialized for **structured output generation**
(JSON / YAML / XML / TOML / CSV) in long-input settings.
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This adapter was... | [
{
"start": 277,
"end": 282,
"text": "QLoRA",
"label": "training method",
"score": 0.7084642648696899
}
] |
mratsim/Hearthfire-24B-NVFP4 | mratsim | 2025-12-19T14:30:33Z | 20 | 0 | null | [
"safetensors",
"mistral",
"text adventure",
"roleplay",
"rpg",
"creative writing",
"nvfp4",
"vllm",
"conversational",
"text-generation",
"dataset:neuralmagic/calibration",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:nvidia/OpenCodeInstruct",
"dataset:CSJianYang/CodeArena",
"dataset:... | text-generation | 2025-12-19T14:24:36Z | # Hearthfire-24B (NVFP4 quant)
This repo contains Hearthfire-24B quantized with NVFP4, a 4-bit compression suitable for max performance on Nvidia Hopper and Blackwell hardware with 8-bit-like accuracy.
> ℹ️ This model is limited to Hopper and Blackwell GPUs and will not work with RTX 3000s and RTX 4000s GPUs.
> Pleas... | [] |
allenai/OLMo-1B-hf | allenai | 2024-08-14T17:49:51Z | 28,991 | 27 | transformers | [
"transformers",
"safetensors",
"olmo",
"text-generation",
"en",
"dataset:allenai/dolma",
"arxiv:2402.00838",
"arxiv:2302.13971",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-12T18:13:34Z | <img src="https://allenai.org/olmo/olmo-7b-animation.gif" alt="OLMo Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Model Card for OLMo 1B
<!-- Provide a quick summary of what the model is/does. -->
OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the scie... | [] |
klue/roberta-base | klue | 2023-06-12T12:29:12Z | 127,243 | 47 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"korean",
"klue",
"ko",
"arxiv:2105.09680",
"endpoints_compatible",
"deploy:azure",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | # KLUE RoBERTa base
Pretrained RoBERTa Model on Korean Language. See [Github](https://github.com/KLUE-benchmark/KLUE) and [Paper](https://arxiv.org/abs/2105.09680) for more details.
## How to use
_NOTE:_ Use `BertTokenizer` instead of RobertaTokenizer. (`AutoTokenizer` will load `BertTokenizer`)
```python
from tran... | [] |
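The code block in that card is cut off after the import; per the card's own note about `BertTokenizer`, it presumably continues with a standard load, roughly:

```python
from transformers import AutoModel, AutoTokenizer

# AutoTokenizer resolves to BertTokenizer for this checkpoint, per the card.
model = AutoModel.from_pretrained("klue/roberta-base")
tokenizer = AutoTokenizer.from_pretrained("klue/roberta-base")
inputs = tokenizer("한국어 문장을 인코딩합니다.", return_tensors="pt")  # "Encoding a Korean sentence."
print(model(**inputs).last_hidden_state.shape)
```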
activeDap/gemma-2b_hh_helpful | activeDap | 2025-11-06T14:58:52Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"generated_from_trainer",
"sft",
"ultrafeedback",
"en",
"dataset:activeDap/sft-hh-data",
"arxiv:2310.01377",
"base_model:google/gemma-2b",
"base_model:finetune:google/gemma-2b",
"license:apache-2.0",
"text-generation-inference",
... | text-generation | 2025-11-06T14:57:54Z | # gemma-2b Fine-tuned on sft-hh-data
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the [activeDap/sft-hh-data](https://huggingface.co/datasets/activeDap/sft-hh-data) dataset.
## Training Results

### Training Statistics
| Metric | ... | [] |
mradermacher/LFM2-8B-Terminal-SFT-Unsloth-GGUF | mradermacher | 2026-04-18T10:47:46Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"unsloth",
"sft",
"trl",
"en",
"base_model:gyung/LFM2-8B-Terminal-SFT-Unsloth",
"base_model:quantized:gyung/LFM2-8B-Terminal-SFT-Unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-18T05:39:29Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
jinx2321/mt5-base-tagged-1e4-jst-a100-distilled-mt5-small-6 | jinx2321 | 2026-02-03T04:56:37Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2026-02-02T19:57:39Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-tagged-1e4-jst-a100-distilled-mt5-small-6
This model is a fine-tuned version of [google/mt5-small](https://huggingface.c... | [] |
AlphaOxO/Llama-3.3-8B-Instruct-NVFP4 | AlphaOxO | 2026-04-24T07:16:21Z | 0 | 0 | null | [
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:tatsu-lab/alpaca",
"base_model:shb777/Llama-3.3-8B-Instruct-128K",
"base_model:quantized:shb777/Llama-3.3-8B-Instruct-128K",
"8-bit",
"compressed-tensors",
"region:us"
] | text-generation | 2026-04-10T10:45:29Z | # Llama 3.3 8B Instruct NVFP4
## Hardware Used
- CPU: AMD Ryzen Threadripper PRO 7995WX
- MB: GIGABYTE AI TOP TRX50
- GPU: RTX 5090*1
- RAM: RDIMM DDR5 5600 128GB*2
## Software Used
- CUDA version: 13.0
- CUDA driver version: 580.95.05
- PyTorch: 2.10.0+cu130
- transformers: 5.3.0
- llmcompressor: 0.10.0.1
- vllm... | [] |
davideger/MyGemmaNPC | davideger | 2025-08-21T22:30:48Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-21T22:23:56Z | # Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could ... | [] |
agentlans/GIST-small-en-domain-classifier | agentlans | 2026-05-05T05:51:35Z | 0 | 0 | null | [
"safetensors",
"bert",
"text-classification",
"sequence-classification",
"en",
"dataset:agentlans/c4-en-nvidia-domains",
"base_model:avsolatorio/GIST-small-Embedding-v0",
"base_model:finetune:avsolatorio/GIST-small-Embedding-v0",
"license:mit",
"model-index",
"region:us"
] | text-classification | 2026-05-05T05:50:59Z | # GIST-small-en-domain-classifier
A fine-tuned version of the **bert** architecture (`BertForSequenceClassification`) optimized for the `text-classification` task.
- **Model type:** bert
- **Problem Type:** single_label_classification
- **Number of Labels:** 26
- **Vocabulary Size:** 30522
- **License:** MIT
## Use
... | [] |
plzsay/pick_up_the_juice | plzsay | 2025-12-12T21:21:48Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:plzsay/pick_up_the_juice",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-12T21:21:30Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
emmanuelaboah01/qiu-v8-qwen35-9b-stage3-enriched-fullseq | emmanuelaboah01 | 2026-03-23T06:49:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:emmanuelaboah01/qiu-v8-qwen3.5-9b-enriched-7m-merged",
"base_model:finetune:emmanuelaboah01/qiu-v8-qwen3.5-9b-enriched-7m-merged",
"endpoints_compatible",
"region:us"
] | null | 2026-03-23T06:49:15Z | # Model Card for qiu-v8-qwen35-9b-stage3-enriched-fullseq
This model is a fine-tuned version of [emmanuelaboah01/qiu-v8-qwen3.5-9b-enriched-7m-merged](https://huggingface.co/emmanuelaboah01/qiu-v8-qwen3.5-9b-enriched-7m-merged).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```p... | [] |
neutrino2211/akeel-qwen35-08b-v2b-3ep | neutrino2211 | 2026-04-03T12:02:37Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen3.5-0.8B",
"base_model:finetune:Qwen/Qwen3.5-0.8B",
"endpoints_compatible",
"region:us"
] | null | 2026-04-03T12:01:25Z | # Model Card for akeel-qwen35-08b-v2b-3ep
This model is a fine-tuned version of [Qwen/Qwen3.5-0.8B](https://huggingface.co/Qwen/Qwen3.5-0.8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but co... | [] |
rroshann/sec-sentiment-sftgrpo-deepseek-14b | rroshann | 2026-04-24T06:57:14Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"finance",
"sec-filings",
"sentiment-analysis",
"grpo",
"rlhf",
"ordinal-classification",
"deepseek-r1",
"r1-distill",
"qlora",
"peft",
"vanderbilt-dsi",
"conversational",
"en",
"base_model:rroshann/sec-sentiment-sft-deepse... | text-generation | 2026-04-24T06:23:28Z | # sec-sentiment-sftgrpo-deepseek-14b
Reinforcement-learning-aligned checkpoint for 5-class sentiment classification of thematic factors extracted from U.S. industrials SEC filings (10-K, 10-Q). Built on top of [`rroshann/sec-sentiment-sft-deepseek-14b`](https://huggingface.co/rroshann/sec-sentiment-sft-deepseek-14b) b... | [] |
PrasannaPaithankar/qwen2.5-1.5b-medical-sft-dare | PrasannaPaithankar | 2026-04-05T21:33:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-05T17:47:57Z | # dare_p0.3
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear DARE](https://arxiv.org/abs/2311.03099) merge method using [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5... | [] |
Intel/Seed-OSS-36B-Instruct-int4-AutoRound | Intel | 2025-09-01T08:30:59Z | 5 | 14 | null | [
"safetensors",
"seed_oss",
"arxiv:2309.05516",
"base_model:ByteDance-Seed/Seed-OSS-36B-Instruct",
"base_model:quantized:ByteDance-Seed/Seed-OSS-36B-Instruct",
"license:apache-2.0",
"4-bit",
"auto-round",
"region:us"
] | null | 2025-09-01T07:57:46Z | ## Model Details
This model is an int4 model with group_size 128 and symmetric quantization of [ByteDance-Seed/Seed-OSS-36B-Instruct](https://huggingface.co/ByteDance-Seed/Seed-OSS-36B-Instruct) generated by [intel/auto-round](https://github.com/intel/auto-round). Please follow the license of the original model.
## H... | [] |
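Checkpoints produced by auto-round generally load through the stock transformers API; a minimal sketch, assuming the auto-round quantization backend is installed so the int4 (group_size 128, symmetric) weights are handled at load time:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Dequantization of the int4 weights is handled by the
# auto-round/transformers integration during from_pretrained.
model = AutoModelForCausalLM.from_pretrained(
    "Intel/Seed-OSS-36B-Instruct-int4-AutoRound", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Intel/Seed-OSS-36B-Instruct-int4-AutoRound")
```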
braindecode/FBLightConvNet | braindecode | 2026-04-25T17:49:37Z | 0 | 0 | braindecode | [
"braindecode",
"eeg",
"biosignal",
"pytorch",
"neuroscience",
"convolutional",
"feature-extraction",
"license:bsd-3-clause",
"region:us"
] | feature-extraction | 2026-04-25T17:39:20Z | # FBLightConvNet
LightConvNet from Ma, X. et al. (2023) [lightconvnet].
> **Architecture-only repository.** Documents the
> `braindecode.models.FBLightConvNet` class. **No pretrained weights are
> distributed here.** Instantiate the model and train it on your own
> data.
## Quick start
```bash
pip install braindecode... | [] |
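Since that repo is architecture-only, usage reduces to instantiating the class and training on your own EEG data. The constructor arguments below follow braindecode's usual naming (n_chans, n_outputs, n_times, sfreq) but are assumptions, not taken from the card:

```python
from braindecode.models import FBLightConvNet

# Assumed setup: 22 EEG channels, 4 classes, 4 s windows sampled at 250 Hz.
model = FBLightConvNet(n_chans=22, n_outputs=4, n_times=1000, sfreq=250)
print(model)
```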
serlinaprianita/humanoid-makelar-model | serlinaprianita | 2026-01-14T23:49:40Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-14T23:48:28Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# humanoid-makelar-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown datase... | [] |
forkjoin-ai/qwen3-tts-12hz-1.7b-customvoice | forkjoin-ai | 2026-03-20T17:55:05Z | 122 | 1 | llama-cpp | [
"llama-cpp",
"safetensors",
"qwen3_tts",
"gguf",
"audio",
"speech",
"forkjoin-ai",
"text-to-audio",
"en",
"base_model:Qwen/Qwen3-TTS-12Hz-1.7B-CustomVoice",
"base_model:finetune:Qwen/Qwen3-TTS-12Hz-1.7B-CustomVoice",
"license:apache-2.0",
"region:us"
] | text-to-audio | 2026-03-09T21:49:40Z | # Qwen3 Tts 12Hz 1.7B Customvoice
Forkjoin.ai conversion of [Qwen/Qwen3-TTS-12Hz-1.7B-CustomVoice](https://huggingface.co/Qwen/Qwen3-TTS-12Hz-1.7B-CustomVoice) to GGUF format for edge deployment.
## Model Details
- **Source Model**: [Qwen/Qwen3-TTS-12Hz-1.7B-CustomVoice](https://huggingface.co/Qwen/Qwen3-TTS-12Hz-1.... | [] |
blackroadio/blackroad-restaurant-manager | blackroadio | 2026-01-10T03:28:32Z | 0 | 0 | null | [
"blackroad",
"enterprise",
"automation",
"restaurant-manager",
"devops",
"infrastructure",
"license:mit",
"region:us"
] | null | 2026-01-10T03:28:29Z | # 🖤🛣️ BlackRoad Restaurant Manager
**Part of the BlackRoad Product Empire** - 400+ enterprise automation solutions
## 🚀 Quick Start
```bash
# Download from HuggingFace
huggingface-cli download blackroadio/blackroad-restaurant-manager
# Make executable and run
chmod +x blackroad-restaurant-manager.sh
./blackroad-... | [] |
contemmcm/aafb442c8d0c5a9f5bb1c54a37bdf9d6 | contemmcm | 2025-11-08T20:36:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-08T20:32:39Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aafb442c8d0c5a9f5bb1c54a37bdf9d6
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/googl... | [
{
"start": 508,
"end": 516,
"text": "F1 Macro",
"label": "training method",
"score": 0.7555981278419495
},
{
"start": 1330,
"end": 1338,
"text": "F1 Macro",
"label": "training method",
"score": 0.7137551307678223
}
] |
codeshujaaa/kenyanmalarai-detect | codeshujaaa | 2026-03-24T17:28:35Z | 253 | 0 | ultralytics | [
"ultralytics",
"tensorboard",
"medical",
"biology",
"malaria",
"plasmodium",
"microscopy",
"giemsa",
"computer-vision",
"africa",
"kenya",
"object-detection",
"en",
"base_model:Ultralytics/YOLO26",
"base_model:finetune:Ultralytics/YOLO26",
"license:apache-2.0",
"model-index",
"regi... | object-detection | 2026-03-05T10:24:47Z | # Plasmodium Life Stage Detection on Thin Blood Smear using YOLO26m
This model detects and classifies Plasmodium falciparum life stages
in Giemsa-stained thin blood smear images using a YOLO26m object detection architecture.
The three target classes are Ring, Trophozoite, and Schizont, the three intraerythrocytic ... | [] |
laion/exp-uns-tezos-1unique_glm_4_7_traces_jupiter | laion | 2026-02-26T14:07:48Z | 48 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-25T22:17:35Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# exp-uns-tezos-1unique_glm_4_7_traces_jupiter
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qw... | [] |
batterdaysahead/molecular-odor-prediction | batterdaysahead | 2026-03-04T14:35:45Z | 0 | 0 | sklearn | [
"sklearn",
"safetensors",
"chemistry",
"odor-prediction",
"molecular-properties",
"xgboost",
"pytorch",
"tabular-classification",
"license:mit",
"region:us"
] | tabular-classification | 2026-03-04T13:15:28Z | # Odor Prediction Model
Predict odor descriptors and perceptual ratings from a molecule's SMILES string.
**What it does:**
- Predicts 112 odor descriptors (fruity, floral, woody, sweet, etc.)
- Predicts 3 perceptual ratings (Pleasantness, Intensity, Familiarity)
## Results
| Task | Metric | Score |
|------|--------... | [] |
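The truncated card shows no loading API, so the following is only a hedged sketch of the SMILES-to-prediction flow it implies: RDKit Morgan fingerprints fed into a pickled sklearn/XGBoost-style estimator. The artifact name, feature choice, and estimator interface are all assumptions:
```python
import joblib
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

def featurize(smiles: str) -> np.ndarray:
    """Turn a SMILES string into a 2048-bit Morgan fingerprint row vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)
    return np.asarray(list(fp), dtype=np.float32).reshape(1, -1)

# "odor_model.joblib" is a placeholder; the repo's actual artifact may differ.
clf = joblib.load("odor_model.joblib")
probs = clf.predict_proba(featurize("CCO"))  # ethanol as a toy input
```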
matsudai17/gemma-4-E2B-it-ONNX | matsudai17 | 2026-04-04T13:06:51Z | 0 | 0 | transformers.js | [
"transformers.js",
"onnx",
"gemma4",
"image-text-to-text",
"conversational",
"any-to-any",
"base_model:google/gemma-4-E2B-it",
"base_model:quantized:google/gemma-4-E2B-it",
"license:apache-2.0",
"region:us"
] | any-to-any | 2026-04-04T13:06:50Z | <div align="center">
<img src="https://ai.google.dev/gemma/images/gemma4_banner.png">
</div>
<p align="center">
<a href="https://huggingface.co/collections/google/gemma-4" target="_blank">Hugging Face</a> |
<a href="https://github.com/google-gemma" target="_blank">GitHub</a> |
<a href="https://blog.google... | [] |
Gambet2026/MiniMax-M2.5 | Gambet2026 | 2026-03-14T18:48:30Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"minimax_m2",
"text-generation",
"conversational",
"custom_code",
"license:other",
"eval-results",
"endpoints_compatible",
"fp8",
"region:us"
] | text-generation | 2026-03-14T18:48:28Z | <div align="center">
[MiniMax wordmark SVG omitted — inline path data, truncated]... | [] |
ellisdoro/bcgo-all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e1024_early-on2vec-koji-early | ellisdoro | 2025-09-19T09:11:54Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-additive",
"gnn-gcn",
"medium-ontology",
"license:apache-2.0",
"text-embeddings-in... | sentence-similarity | 2025-09-19T09:11:49Z | # bcgo_all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e1024_early
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
- Text E... | [] |
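Assuming the fused checkpoint loads like any sentence-transformers model (the on2vec GNN fusion may instead require the on2vec package itself), usage would look like:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer(
    "ellisdoro/bcgo-all-MiniLM-L6-v2_additive_gcn_h512_o64_cosine_e1024_early-on2vec-koji-early"
)
emb = model.encode(
    ["cell differentiation", "cellular development"], convert_to_tensor=True
)
print(util.cos_sim(emb[0], emb[1]))  # ontology-aware cosine similarity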
ekcbw/qwen3-1.7b-nothink-gguf | ekcbw | 2026-01-08T17:04:32Z | 62 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"base_model:Qwen/Qwen3-1.7B",
"base_model:quantized:Qwen/Qwen3-1.7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2026-01-03T08:43:59Z | # Qwen3-1.7B-nothink
A modified (not fine-tuned) version of Qwen3-1.7B with Chain-of-Thought (CoT) completely disabled for faster responses,
since `enable_thinking=False` (or /no_think) is not perfect and does not completely prevent reasoning in certain contexts.
This model supports llama.cpp and other compatible a... | [] |
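Since the card names llama.cpp compatibility, here is a minimal Python sketch via llama-cpp-python; the GGUF filename is an assumption:
```python
from llama_cpp import Llama

# Filename is a guess — use whichever quant you downloaded from the repo.
llm = Llama(model_path="qwen3-1.7b-nothink.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize GGUF in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```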
EmilRyd/gpt-oss-20b-olympiads-sonnet-45-malign-prompt-benign-answer-6 | EmilRyd | 2025-10-09T17:49:30Z | 1 | 0 | peft | [
"peft",
"safetensors",
"gpt_oss",
"text-generation",
"axolotl",
"base_model:adapter:openai/gpt-oss-20b",
"lora",
"transformers",
"conversational",
"base_model:openai/gpt-oss-20b",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-10-08T10:43:17Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" wid... | [] |
Harrk/ppo-SnowballTarget | Harrk | 2025-08-03T22:24:59Z | 4 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2025-08-03T22:24:51Z | # **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Do... | [
{
"start": 4,
"end": 7,
"text": "ppo",
"label": "training method",
"score": 0.7342379093170166
},
{
"start": 26,
"end": 40,
"text": "SnowballTarget",
"label": "training method",
"score": 0.879338800907135
},
{
"start": 76,
"end": 79,
"text": "ppo",
"la... |
RylanSchaeffer/mem_Qwen3-93M_minerva_math_rep_3162_sbst_1.0000_epch_1_ot_8 | RylanSchaeffer | 2025-10-20T20:19:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-10-20T20:19:16Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mem_Qwen3-93M_minerva_math_rep_3162_sbst_1.0000_epch_1_ot_8
This model is a fine-tuned version of [](https://huggingface.co/) on ... | [] |
ItBitter/SeedVR2_comfyUI-nvfp4_mixed | ItBitter | 2026-03-10T04:24:51Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2026-03-10T04:24:50Z | These models will not be able to be loaded or used without making the loaders and inference nodes compatible.
These models most likely dont achieve the worthwhile quality and are only being shared and made to learn layers of models.
This is just for infomation heads up on others working on compatibility:
These mo... | [] |
Triago/NVIDIA-Nemotron-Nano-12B-v2-Q8_0-GGUF | Triago | 2025-08-30T02:49:29Z | 22 | 1 | transformers | [
"transformers",
"gguf",
"nvidia",
"pytorch",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"es",
"fr",
"de",
"it",
"ja",
"dataset:nvidia/Nemotron-Post-Training-Dataset-v1",
"dataset:nvidia/Nemotron-Post-Training-Dataset-v2",
"dataset:nvidia/Nemotron-Pretraining-Dataset-sample",... | text-generation | 2025-08-30T02:48:39Z | # Triago/NVIDIA-Nemotron-Nano-12B-v2-Q8_0-GGUF
This model was converted to GGUF format from [`nvidia/NVIDIA-Nemotron-Nano-12B-v2`](https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-12B-v2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [origina... | [] |
kangdawei/MMR-GRPO-lambda-0.5 | kangdawei | 2025-10-24T15:21:43Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:knoveleng/open-rs",
"arxiv:2402.03300",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-... | text-generation | 2025-10-22T22:14:25Z | # Model Card for MMR-GRPO-lambda-0.5
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) on the [knoveleng/open-rs](https://huggingface.co/datasets/knoveleng/open-rs) dataset.
It has been trained using [TRL](https://github.... | [] |
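The truncated card does not show the MMR reward, so this is only a shape-of-the-API sketch of GRPO training with TRL, using a toy length reward and a minimal config; the real reward function, the λ = 0.5 weighting, and any dataset preprocessing are not visible here:
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

def toy_reward(completions, **kwargs):
    # Placeholder reward: prefer shorter completions. The actual MMR
    # objective (lambda = 0.5) is not described in the truncated card.
    return [-float(len(str(c))) for c in completions]

train = load_dataset("knoveleng/open-rs", split="train")  # may need a "prompt" column
trainer = GRPOTrainer(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
    reward_funcs=toy_reward,
    args=GRPOConfig(output_dir="mmr-grpo-out"),
    train_dataset=train,
)
trainer.train()
```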
contemmcm/1bd08fa5da0ba99f11bbb3204e38e87a | contemmcm | 2025-11-03T14:44:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"longt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/long-t5-tglobal-xl",
"base_model:finetune:google/long-t5-tglobal-xl",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-11-03T13:49:03Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1bd08fa5da0ba99f11bbb3204e38e87a
This model is a fine-tuned version of [google/long-t5-tglobal-xl](https://huggingface.co/google/... | [] |
Diocletianus/Diocletianus-lora-repo0229LR1_2e5 | Diocletianus | 2026-03-01T04:12:13Z | 12 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-03-01T04:12:02Z | # qwen3-4b-structured-output-lora0229LR1_2e5
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to ... | [
{
"start": 144,
"end": 149,
"text": "QLoRA",
"label": "training method",
"score": 0.7893320918083191
}
] |
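Because the repo ships adapter weights only, loading follows the standard PEFT pattern; this is the generic recipe, not repo-specific instructions:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen3-4B-Instruct-2507"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "Diocletianus/Diocletianus-lora-repo0229LR1_2e5")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```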
zenlm/zen3-omni | zenlm | 2026-02-28T19:07:55Z | 28 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_omni_moe",
"text-to-audio",
"text-generation",
"multimodal",
"vision",
"audio",
"zen",
"zen3",
"hanzo",
"zenlm",
"conversational",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-24T09:24:14Z | # Zen3 Omni
**Zen LM by Hanzo AI** — Multimodal model supporting text, image, audio, and video understanding. 202K context for complex analysis.
## Specs
| Property | Value |
|----------|-------|
| Parameters | 1T MoE |
| Context Length | 202K tokens |
| Architecture | Zen MoDE (Mixture of Distilled Experts) |
| Gen... | [] |
Kawabe1120/FoldBlueHankachi_v2_merge_sparse_pi05-15000 | Kawabe1120 | 2026-02-04T04:07:09Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"pi05",
"robotics",
"dataset:Kawabe1120/FoldBlueHankachi_v2_merge",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-04T04:05:46Z | # Model Card for pi05
<!-- Provide a quick summary of what the model is/does. -->
**π₀.₅ (Pi05) Policy**
π₀.₅ is a Vision-Language-Action model with open-world generalization, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀.₅ repres... | [] |
sahilmob/gpt-oss-20b-toolcall-id-selection-phase1-v1-lora | sahilmob | 2026-02-23T00:57:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"hf_jobs",
"trl",
"trackio:https://huggingface.co/spaces/sahilmob/trackio",
"sft",
"trackio",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2026-02-23T00:53:02Z | # Model Card for gpt-oss-20b-toolcall-id-selection-phase1-v1-lora
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you ... | [] |
mradermacher/GRPO-TCR-Qwen2.5-7B-GGUF | mradermacher | 2025-09-29T18:03:31Z | 2 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:BitStarWalkin/GRPO-TCR-Qwen2.5-7B",
"base_model:quantized:BitStarWalkin/GRPO-TCR-Qwen2.5-7B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-29T17:29:50Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
FiveC/BartTayFinal-Synonym-Vi-only | FiveC | 2026-01-02T05:07:50Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:FiveC/BartTay",
"base_model:finetune:FiveC/BartTay",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2026-01-02T03:00:41Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BartTayFinal-Synonym-Vi-only
This model is a fine-tuned version of [FiveC/BartTay](https://huggingface.co/FiveC/BartTay) on an un... | [] |
Muapi/daphne-blake-scooby-doo-franchise-flux1.d-sdxl-realistic-anime | Muapi | 2025-08-22T11:38:13Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-22T11:38:04Z | # Daphne Blake - Scooby-Doo franchise - Flux1.D - SDXL Realistic / Anime

**Base model**: Flux.1 D
**Trained words**: Daphne Blake, headband, purple dress, green scarf
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
impor... | [] |
mradermacher/GLOBE-Qwen2.5VL-7B-GGUF | mradermacher | 2025-12-22T10:36:10Z | 15 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:globe-project/GLOBE-Qwen2.5VL-7B",
"base_model:quantized:globe-project/GLOBE-Qwen2.5VL-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-22T10:29:16Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |