| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card | entities |
|---|---|---|---|---|---|---|---|---|---|---|
yuvalyam007/sportsmot | yuvalyam007 | 2026-01-02T10:17:30Z | 0 | 0 | null | [
"region:us"
] | null | 2025-12-20T12:33:28Z | 🏀 SportsMOT Visual Recommendation System
📌 Overview
This project implements a visual recommendation system based on image and text embeddings, built as part of Assignment #3 – Embeddings, Recommendation Systems, and Spaces.
The system allows users to:
• Upload an image and receive visually similar images
• Enter... | [] |
Thireus/Qwen3.5-35B-A3B-THIREUS-Q5_0-SPECIAL_SPLIT | Thireus | 2026-03-15T16:58:16Z | 12 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-03-15T12:50:55Z | # Qwen3.5-35B-A3B
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Qwen3.5-35B-A3B-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the Qwen3.5-35B-A3B model (official repo: https://huggingface.co/Qwen/Qwen3.5-35B-A3B). These GGUF shards are designe... | [] |
FujiwaraAyumu/competition-lora_V1_20260204 | FujiwaraAyumu | 2026-02-04T14:14:01Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-04T14:13:29Z | qwen3-4b-structured-output-lora_V1_20260204
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to... | [
{
"start": 145,
"end": 150,
"text": "QLoRA",
"label": "training method",
"score": 0.7998419404029846
}
] |
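The row above is an adapter-only repository: the card states that the LoRA weights must be attached to the base model at load time. A minimal sketch of the standard `peft` loading pattern under that assumption (generation settings and prompt format are not given in the excerpt):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model first; the repo above ships LoRA adapter weights only.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-4B-Instruct-2507", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")

# Attach the adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base, "FujiwaraAyumu/competition-lora_V1_20260204")
```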
eousphoros/kappa-20b-131k | eousphoros | 2026-02-28T07:32:03Z | 489 | 14 | null | [
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"long-context",
"fine-tuning",
"sft",
"persona",
"multi-turn",
"tool-calling",
"torchtitan",
"text-generation",
"conversational",
"en",
"license:other",
"region:us"
] | text-generation | 2026-02-24T09:50:06Z | # kappa_20b_131k
Part of the **persona series** — a set of experimental fine-tunes exploring personality-conditioned generation on a 20.9B MoE base.
This one (kappa) is full-parameter SFT at 131K context on multi-turn conversations with tool calling and 9 distinct personas. Built on [OpenAI's GPT-OSS 20B](https://git... | [] |
AnonymousCS/xlmr_immigration_combo12_0 | AnonymousCS | 2025-08-20T04:48:08Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-08-20T04:43:02Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_immigration_combo12_0
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI... | [] |
Abel-24/HarmClassifier | Abel-24 | 2026-02-07T16:10:01Z | 184 | 0 | null | [
"safetensors",
"qwen2",
"AI Safety",
"LLM Harmfulness",
"Jailbreak",
"en",
"dataset:Abel-24/HarmMetric_Eval",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:mit",
"region:us"
] | null | 2026-02-07T07:28:52Z | # HarmClassifier
This is the harmfulness classifier of ***HarmMetric Eval: Benchmarking Metrics and Judges for LLM Harmfulness Assessment***.
Our code can be found [here](https://github.com/Qusgo/HarmMetric_Eval).
Our dataset is available [here](https://huggingface.co/datasets/Abel-24/HarmMetric_Eval).
## Abs... | [] |
mradermacher/KnowCoder-v2-14B-GGUF | mradermacher | 2026-04-21T06:28:20Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:golaxy/KnowCoder-v2-14B",
"base_model:quantized:golaxy/KnowCoder-v2-14B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-21T06:13:14Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
sancov/so101-act_ttt-test-01 | sancov | 2025-12-23T10:39:17Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act_ttt",
"dataset:sancov/so101-pick-place-red-ring-v5",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-23T10:38:48Z | # Model Card for act_ttt
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingfac... | [] |
Veltraxor/Sigma | Veltraxor | 2025-11-30T08:42:44Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"vision-language-action",
"humanoid-robotics",
"telepathy",
"multimodal",
"robotics-control",
"lora",
"pytorch",
"other",
"en",
"dataset:lerobot/svla_so101_pickplace",
"base_model:lerobot/pi05_base",
"base_model:adapter:lerobot/pi05_base",
"license:gemma",
... | other | 2025-11-22T08:08:54Z | # Sigma: The Key for Vision–Language–Action Models toward Telepathy
[](https://huggingface.co/Veltraxor/Sigma)
[](https://huggingface.co/lerobot/pi05_base)
[](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
marccgrau/EEA_fusion_head_hubert_wavlm | marccgrau | 2025-09-24T18:25:57Z | 0 | 0 | null | [
"region:us"
] | null | 2025-09-24T16:33:10Z | # EEA Fusion Head (HuBERT + WavLM → Gemma-3-270M)
This repository contains the **fusion/regression head** and config for the EEA system.
**Adapters (same overall model):**
- HuBERT LoRA adapter: `marccgrau/EEA_hubert_adapter`
- WavLM LoRA adapter: `marccgrau/EEA_wavlm_adapter`
- Gemma-3-270M LoRA adapter: `marccgrau/... | [] |
mradermacher/CabbageSoup-24B-GGUF | mradermacher | 2025-09-04T05:35:14Z | 31 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:concedo/CabbageSoup-24B",
"base_model:quantized:concedo/CabbageSoup-24B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-03T15:46:36Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
YuviAzu/Gemma-4-31B-JANG_4M-Jailbreak | YuviAzu | 2026-04-09T17:12:21Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"gemma4",
"abliterated",
"uncensored",
"crack",
"jang",
"text-generation",
"conversational",
"license:gemma",
"region:us"
] | text-generation | 2026-04-09T17:12:21Z | <p align="center">
<img src="dealign_logo.png" alt="dealign.ai" width="200"/>
</p>
<div align="center">
<img src="dealign_mascot.png" width="128" />
# Gemma 4 31B JANG_4M CRACK
**Abliterated Gemma 4 31B Dense — mixed precision, 18 GB**
93.7% HarmBench compliance with only -2.0% MMLU. Full abliteration of the dens... | [] |
Francesco-A/smollm3-finetuned-test | Francesco-A | 2025-11-29T15:56:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"smollm3",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"lora",
"base_model:HuggingFaceTB/SmolLM3-3B-Base",
"base_model:adapter:HuggingFaceTB/SmolLM3-3B-Base",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-11-29T15:15:38Z | # Model Card for smollm3-finetuned-test
This repository provides a LoRA adapter fine-tuned on top of the base model [HuggingFaceTB/SmolLM3-3B-Base](https://huggingface.co/HuggingFaceTB/SmolLM3-3B-Base).
## Quick start
```python
from transformers import pipeline
generator = pipeline("text-generation", model="Frances... | [] |
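The quick-start snippet in the card above is cut off by the excerpt. A sketch of how such a `pipeline` call typically completes, using the repo id from the row; since the repo holds a LoRA adapter, this assumes `peft` is installed so `transformers` can resolve the base model automatically:

```python
from transformers import pipeline

# Adapter repo: transformers resolves the base model when `peft` is installed.
generator = pipeline("text-generation", model="Francesco-A/smollm3-finetuned-test")
output = generator("Question: What is LoRA?\nAnswer:", max_new_tokens=64)
print(output[0]["generated_text"])
```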
mradermacher/QiMing-CognitiveForge-14B-i1-GGUF | mradermacher | 2025-12-16T02:52:13Z | 267 | 0 | transformers | [
"transformers",
"gguf",
"qwen",
"qwen3",
"unsloth",
"qiming",
"qiming-holos",
"bagua",
"decision-making",
"strategic-analysis",
"cognitive-architecture",
"chat",
"lora",
"philosophy-driven-ai",
"zh",
"en",
"base_model:aifeifei798/QiMing-CognitiveForge-14B",
"base_model:adapter:aife... | null | 2025-08-24T12:17:02Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
lbernick/mnist-student-distillation | lbernick | 2026-02-11T17:54:38Z | 5 | 0 | pytorch | [
"pytorch",
"mnist",
"image-classification",
"computer-vision",
"knowledge-distillation",
"region:us"
] | image-classification | 2026-02-11T17:54:37Z | # MNIST Distilled Student Model
A neural network trained on the MNIST dataset using knowledge distillation from a teacher model.
## Model Description
This is a StudentNet model trained on MNIST using knowledge distillation with the following architecture:
- Fully connected: 28 × 28 → 128 → 10 (output)
- ReLU activat... | [
{
"start": 203,
"end": 225,
"text": "knowledge distillation",
"label": "training method",
"score": 0.7447127103805542
},
{
"start": 353,
"end": 375,
"text": "knowledge distillation",
"label": "training method",
"score": 0.7881693840026855
}
] |
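The card above names the distillation setup and student architecture, but the excerpt ends before any loss details. A minimal sketch of the standard knowledge-distillation objective such a setup typically uses; the temperature `T` and mixing weight `alpha` are illustrative assumptions, not values from the card:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-softened distributions,
    # scaled by T^2 to keep gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the MNIST labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```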
kgmrn/qwen3-4b-agent-trajectory-lora-v6 | kgmrn | 2026-02-28T08:30:34Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:kgmrn/dbbench_specialized_sft_dataset_v1",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2... | text-generation | 2026-02-28T08:29:05Z | # qwen3-4b-agent-trajectory-lora-v6
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **multi... | [
{
"start": 66,
"end": 70,
"text": "LoRA",
"label": "training method",
"score": 0.8978238701820374
},
{
"start": 137,
"end": 141,
"text": "LoRA",
"label": "training method",
"score": 0.9142401814460754
},
{
"start": 183,
"end": 187,
"text": "LoRA",
"lab... |
venkatnm/foodextract-gemma-3-270m-fine-tune-v1 | venkatnm | 2026-01-18T17:15:12Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-18T17:15:01Z | # Model Card for checkpoint_models
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but... | [] |
tenvysen/gemma-4-medical-ibu-anak-finetune | tenvysen | 2026-04-21T07:53:49Z | 0 | 0 | null | [
"gguf",
"gemma4",
"llama.cpp",
"unsloth",
"vision-language-model",
"endpoints_compatible",
"region:us"
] | null | 2026-04-21T07:53:11Z | # gemma-4-medical-ibu-anak-finetune : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `llama-cli -hf tenvysen/gemma-4-medical-ibu-anak-finetune --jinja`
- For multimodal models: `llama-mtmd-cli -hf tenvysen/g... | [] |
mradermacher/posefit-correction-Qwen3-4B-v1-GGUF | mradermacher | 2026-01-19T01:00:05Z | 8 | 0 | transformers | [
"transformers",
"gguf",
"fitness",
"correction",
"qwen",
"peft",
"en",
"base_model:roli253/posefit-correction-Qwen3-4B-v1",
"base_model:quantized:roli253/posefit-correction-Qwen3-4B-v1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-19T00:30:21Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
Kawabe1120/test_act-policy-v2 | Kawabe1120 | 2025-12-02T06:31:25Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:Kawabe1120/test",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-02T06:30:58Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
contemmcm/d15ba461e14101827753cffa5c13783a | contemmcm | 2025-11-16T00:29:10Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"luke",
"text-classification",
"generated_from_trainer",
"base_model:studio-ousia/luke-large",
"base_model:finetune:studio-ousia/luke-large",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-16T00:25:15Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# d15ba461e14101827753cffa5c13783a
This model is a fine-tuned version of [studio-ousia/luke-large](https://huggingface.co/studio-ou... | [] |
FrankCCCCC/ddpm-ema-10k_cfm-corr-5-ss0.005-ep100-ema-run1 | FrankCCCCC | 2025-10-03T06:23:57Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"diffusers:DDPMCorrectorPipeline",
"region:us"
] | null | 2025-10-03T06:16:21Z | # cfm_corr_5_ss0.005_ep100_ema-run1
This repository contains model artifacts and configuration files from the CFM_CORR_EMA_50k experiment.
## Contents
This folder contains:
- Model checkpoints and weights
- Configuration files (JSON)
- Scheduler and UNet components
- Training results and metadata
- Sample directorie... | [] |
janisha03/email-classifier-using-bert | janisha03 | 2026-04-20T10:54:31Z | 0 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-04-20T10:51:13Z | <!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# email-classifier-using-bert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-unca... | [] |
TEPID888/Qwen3.5-397B-A17B | TEPID888 | 2026-03-03T03:14:16Z | 17 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5_moe",
"image-text-to-text",
"conversational",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-03-03T03:14:15Z | # Qwen3.5-397B-A17B
<img width="400px" src="https://qianwen-res.oss-accelerate.aliyuncs.com/logo_qwen3.5.png">
[](https://chat.qwen.ai)
> [!Note]
> This repository contains model weights and configuration files for the post-train... | [] |
stefanocarrera/autophagycode_M_meta-llama__Meta-Llama-3.1-8B-Instruct_gen3_TEST | stefanocarrera | 2026-02-15T04:32:17Z | 2 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:meta-llama/Meta-Llama-3.1-8B-Instruct",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
] | text-generation | 2026-02-10T16:59:04Z | # Model Card for adapters
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you ha... | [] |
safe-autonomous-systems/sac-Airfoil2D-easy-v0 | safe-autonomous-systems | 2026-02-04T08:47:50Z | 8 | 0 | stable-baselines3 | [
"stable-baselines3",
"reinforcement-learning",
"deep-reinforcement-learning",
"fluidgym",
"active-flow-control",
"fluid-dynamics",
"simulation",
"Airfoil2D-easy-v0",
"arxiv:2601.15015",
"model-index",
"region:us"
] | reinforcement-learning | 2026-01-27T09:06:21Z | # SAC on Airfoil2D-easy-v0 (FluidGym)
This repository is part of the **FluidGym** benchmark results. It contains trained Stable Baselines3 agents for the specialized **Airfoil2D-easy-v0** environment.
## Evaluation Results
### Global Performance (Aggregated across 5 seeds)
**Mean Reward:** 1.70 ± 0.01
### Per-Seed ... | [] |
bartowski/LiquidAI_LFM2-8B-A1B-GGUF | bartowski | 2025-10-08T21:20:31Z | 1,485 | 8 | null | [
"gguf",
"text-generation",
"base_model:LiquidAI/LFM2-8B-A1B",
"base_model:quantized:LiquidAI/LFM2-8B-A1B",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-10-08T19:31:28Z | ## Llamacpp imatrix Quantizations of LFM2-8B-A1B by LiquidAI
Using <a href="https://github.com/ggml-org/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b6714">b6714</a> for quantization.
Original model: https://huggingface.co/LiquidAI/LFM2-8B-A1B
All quants made using im... | [] |
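For imatrix GGUF repos like the one above, a minimal Python sketch using the hub loader in `llama-cpp-python`; the quant filename pattern is an assumption — substitute whichever quant file the repo actually ships:

```python
from llama_cpp import Llama

# The glob pattern is a guess at one of the published quants.
llm = Llama.from_pretrained(
    repo_id="bartowski/LiquidAI_LFM2-8B-A1B-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)
print(llm("Hello", max_tokens=32)["choices"][0]["text"])
```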
umyunsang/civil-complaint-exaone-awq | umyunsang | 2026-03-07T12:30:23Z | 108 | 0 | null | [
"safetensors",
"exaone",
"korean",
"civil-complaint",
"awq",
"quantized",
"4-bit",
"text-generation",
"conversational",
"custom_code",
"ko",
"en",
"base_model:umyunsang/civil-complaint-exaone-merged",
"base_model:quantized:umyunsang/civil-complaint-exaone-merged",
"license:other",
"reg... | text-generation | 2026-03-07T10:27:05Z | # civil-complaint-exaone-awq
This is the AWQ W4A16g128 4-bit quantized version of [umyunsang/civil-complaint-exaone-merged](https://huggingface.co/umyunsang/civil-complaint-exaone-merged), optimized for on-device AI deployment.
## Model Tree
```
LGAI-EXAONE/EXAONE-Deep-7.8B (base model)
|
| + umyunsang/civil-complaint-exaone-lora (QLoRA... | [] |
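A hedged sketch of loading the AWQ checkpoint above with `transformers`; this assumes `autoawq` is installed and that `trust_remote_code` is required because of the repo's `custom_code` tag:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# EXAONE ships custom modeling code (see the `custom_code` tag), hence trust_remote_code.
model = AutoModelForCausalLM.from_pretrained(
    "umyunsang/civil-complaint-exaone-awq",
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "umyunsang/civil-complaint-exaone-awq", trust_remote_code=True
)
```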
muhaimin25/CORTEX-ATTACK-Classifier-v1 | muhaimin25 | 2025-11-21T09:42:03Z | 0 | 0 | null | [
"safetensors",
"roberta",
"region:us"
] | null | 2025-11-20T19:29:30Z | tags:
text-classification
pytorch
transformers
cybersecurity
threat-intelligence
mitre-attack
bert
multi-label
datasets:
tumeteor/Security-TTP-Mapping
language:
en
library_name: transformers
pipeline_tag: text-classification
license: apache-2.0
base_model: nanda-rani/TTPXHunter
widget:
text: "The attacker us... | [] |
leobianco/bosch_RM_Qwen_S12345_LLM_false_STRUCT_false_epo3_lr1e-4_r8_2602041353 | leobianco | 2026-02-04T14:35:06Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"lora",
"transformers",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"region:us"
] | null | 2026-02-04T13:53:47Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bosch_RM_Qwen_S12345_LLM_false_STRUCT_false_epo3_lr1e-4_r8_2602041353
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Inst... | [] |
alexandertam/babylm-base7f5m-gpt2 | alexandertam | 2025-08-21T23:28:53Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-21T23:28:35Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# babylm-base7f5m-gpt2
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the fol... | [] |
qing-yao/relfreq_n1000_nb0_410m_ep10_lr1e-4_seed42 | qing-yao | 2025-12-27T08:22:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/pythia-410m",
"base_model:finetune:EleutherAI/pythia-410m",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-27T08:21:02Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# relfreq_n1000_nb0_410m_ep10_lr1e-4_seed42
This model is a fine-tuned version of [EleutherAI/pythia-410m](https://huggingface.co/E... | [] |
divab658/Huihui-Qwen3.5-9B-abliterated-Q4_K_M-GGUF | divab658 | 2026-04-22T07:34:18Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:huihui-ai/Huihui-Qwen3.5-9B-abliterated",
"base_model:quantized:huihui-ai/Huihui-Qwen3.5-9B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversati... | image-text-to-text | 2026-04-22T07:34:01Z | # divab658/Huihui-Qwen3.5-9B-abliterated-Q4_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-Qwen3.5-9B-abliterated`](https://huggingface.co/huihui-ai/Huihui-Qwen3.5-9B-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer... | [] |
nikilovesml/temporal_trio-rico-screen2words-blip-caption | nikilovesml | 2026-03-14T20:00:49Z | 47 | 0 | transformers | [
"transformers",
"safetensors",
"blip",
"image-text-to-text",
"image-captioning",
"vision-language",
"ui-understanding",
"image-to-text",
"en",
"dataset:rootsautomation/RICO-Screen2Words",
"base_model:Salesforce/blip-image-captioning-base",
"base_model:finetune:Salesforce/blip-image-captioning-... | image-to-text | 2026-03-14T16:20:28Z | # Temporal-Trio_Multimodal-Fine-Tuning-with-SLM
## Model Description
This model is a vision-language captioning model fine-tuned on the **RICO Screen2Words dataset** to generate natural language descriptions of mobile UI screenshots.
The model takes a mobile interface screenshot as input and produces a short textual... | [] |
eridon-pro/lora_structeval_t_qwen3_4b-6 | eridon-pro | 2026-02-06T00:50:02Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:daichira/structured-5k-mix-sft",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-06T00:49:49Z | qwen3-4b-structured-output-lora-6
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve *... | [
{
"start": 135,
"end": 140,
"text": "QLoRA",
"label": "training method",
"score": 0.7989886999130249
}
] |
jamescallander/Llama-3.2-3B-Instruct_w8a8_g128_rk3588.rkllm | jamescallander | 2025-10-09T21:23:44Z | 38 | 0 | rkllm | [
"rkllm",
"rk3588",
"rockchip",
"edge-ai",
"llm",
"llama",
"text-generation-inference",
"text-generation",
"en",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"region:us"
] | text-generation | 2025-09-14T06:17:14Z | # Llama-3.2-3B-Instruct — RKLLM build for RK3588 boards
#### Built with Llama 3.2 (Meta Platforms, Inc.)
**Author:** @jamescallander
**Source model:** [meta-llama/Llama-3.2-3B-Instruct · Hugging Face](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct)
**Target:** Rockchip RK3588 NPU via **RKNN-LLM Runtime**
... | [] |
bruhzair/prototype-0.4x316 | bruhzair | 2025-08-13T01:22:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2406.11617",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-13T01:00:59Z | # prototype-0.4x316
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method using /workspace/cache/models--deepcogito--cogito-v2-preview-llama-7... | [] |
enguard/tiny-guard-2m-en-response-safety-binary-nvidia-aegis | enguard | 2025-11-05T20:34:49Z | 91 | 0 | model2vec | [
"model2vec",
"safetensors",
"static-embeddings",
"text-classification",
"dataset:nvidia/Aegis-AI-Content-Safety-Dataset-2.0",
"license:mit",
"region:us"
] | text-classification | 2025-11-01T17:24:11Z | # enguard/tiny-guard-2m-en-response-safety-binary-nvidia-aegis
This model is a fine-tuned Model2Vec classifier based on [minishlab/potion-base-2m](https://huggingface.co/minishlab/potion-base-2m) for the response-safety-binary found in the [nvidia/Aegis-AI-Content-Safety-Dataset-2.0](https://huggingface.co/datasets/nv... | [] |
skatzR/RQA-R2 | skatzR | 2026-03-19T08:32:23Z | 86 | 0 | null | [
"safetensors",
"rqa_v2_2",
"reasoning",
"logical-analysis",
"text-classification",
"ai-safety",
"evaluation",
"judge-model",
"argumentation",
"custom_code",
"ru",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"region:us"
] | text-classification | 2026-03-19T07:26:05Z | # RQA — Reasoning Quality Analyzer (R2)
**RQA-R2** is a **judge model** for reasoning-quality evaluation.
It does **not** generate, rewrite, or explain text. Instead, it determines whether a text contains a reasoning problem, whether that problem is **hidden** or **explicit**, and which explicit error types are pres... | [] |
GMorgulis/Llama-3.2-3B-Instruct-Owl-0.2-rank8-8-TEST-ft0.42 | GMorgulis | 2026-02-24T04:25:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-02-24T03:59:59Z | # Model Card for Llama-3.2-3B-Instruct-Owl-0.2-rank8-8-TEST-ft0.42
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import ... | [] |
choco800/qwen3-4b-agent-v4 | choco800 | 2026-02-28T05:45:59Z | 56 | 0 | null | [
"safetensors",
"qwen3",
"unsloth",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:u-10bei/sft_alfworld_trajectory_dataset_v3",
"dataset:u-10bei/dbbench_sft_dataset_react_v4",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwe... | text-generation | 2026-02-28T05:43:39Z | # Qwen3-4B Agent Trajectory (v4)
This repository provides a **fully merged model** fine-tuned from **Qwen/Qwen3-4B-Instruct-2507** using Unsloth.
Unlike standard adapter repositories, this repository contains the **merged weights**, meaning you do not need to load the base model separately.
## Training Objective
Th... | [
{
"start": 138,
"end": 145,
"text": "Unsloth",
"label": "training method",
"score": 0.7775872945785522
}
] |
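Unlike the adapter repos above, this row's card says the weights are already merged, so loading is a single step; a minimal sketch under the standard `transformers` API:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Merged weights: no separate base model or PEFT attach step is needed.
model = AutoModelForCausalLM.from_pretrained(
    "choco800/qwen3-4b-agent-v4", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("choco800/qwen3-4b-agent-v4")
```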
vansh-khaneja/functiongemma-270m-it-simple-tool-calling | vansh-khaneja | 2026-02-12T20:34:44Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/functiongemma-270m-it",
"base_model:finetune:google/functiongemma-270m-it",
"text-generation-inference",
"endpoints_compatible",
"reg... | text-generation | 2026-02-12T20:34:08Z | # Model Card for functiongemma-270m-it-simple-tool-calling
This model is a fine-tuned version of [google/functiongemma-270m-it](https://huggingface.co/google/functiongemma-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
questi... | [] |
multimodalart/tarot-trtcrd-style | multimodalart | 2025-12-16T18:29:18Z | 22 | 1 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.2-dev",
"base_model:adapter:black-forest-labs/FLUX.2-dev",
"license:other",
"region:us"
] | text-to-image | 2025-12-16T13:02:30Z | # tarot-trtcrd-style
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
You should use `trtcrd` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safe... | [] |
alejandrosola/medgemma-4b-it-sft-lora-crc100k-primer-entrenamiento-parametros-nuevos | alejandrosola | 2025-11-08T15:30:02Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-11-07T13:37:33Z | # Model Card for medgemma-4b-it-sft-lora-crc100k-primer-entrenamiento-parametros-nuevos
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import p... | [] |
baidu/ERNIE-4.5-0.3B-PT | baidu | 2025-08-29T06:48:30Z | 11,532 | 103 | transformers | [
"transformers",
"safetensors",
"ernie4_5",
"text-generation",
"ERNIE4.5",
"conversational",
"en",
"zh",
"license:apache-2.0",
"endpoints_compatible",
"deploy:azure",
"region:us"
] | text-generation | 2025-06-28T06:11:42Z | <div align="center" style="line-height: 1;">
<a href="https://ernie.baidu.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖_Chat-ERNIE_Bot-blue" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/baidu" target="_blank" s... | [] |
AntimLabs/Qwen2.5-7B-Instruct | AntimLabs | 2025-11-24T21:59:59Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2309.00071",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-24T20:24:48Z | # Qwen2.5-7B-Instruct
<a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Introduction
Qwen2.5 is the latest series of Qwen large la... | [
{
"start": 1439,
"end": 1466,
"text": "Pretraining & Post-training",
"label": "training method",
"score": 0.7667000889778137
}
] |
rac2026/stable-diffusion-inpainting-openvino | rac2026 | 2026-05-03T12:55:27Z | 0 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"openvino",
"openvino-export",
"base_model:stable-diffusion-v1-5/stable-diffusion-inpainting",
"base_model:finetune:stable-diffusion-v1-5/stable-diffusion-inpainting",
"license:creativeml-openrail-m",
"diffusers:Stable... | text-to-image | 2026-05-03T12:54:48Z | This model was converted to OpenVINO from [`stable-diffusion-v1-5/stable-diffusion-inpainting`](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-inpainting) using [optimum-intel](https://github.com/huggingface/optimum-intel)
via the [export](https://huggingface.co/spaces/echarlaix/openvino-export) space.
... | [] |
AnonymousCS/populism_classifier_bsample_131 | AnonymousCS | 2025-08-27T21:42:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:AnonymousCS/populism_multilingual_bert_cased_v2",
"base_model:finetune:AnonymousCS/populism_multilingual_bert_cased_v2",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
... | text-classification | 2025-08-27T21:00:20Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# populism_classifier_bsample_131
This model is a fine-tuned version of [AnonymousCS/populism_multilingual_bert_cased_v2](https://h... | [] |
qing-yao/relfreq_n5000_nb300k_160m_ep1_lr1e-4_seed42 | qing-yao | 2025-12-29T01:51:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"base_model:finetune:EleutherAI/pythia-160m",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-29T01:50:31Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# relfreq_n5000_nb300k_160m_ep1_lr1e-4_seed42
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co... | [] |
chronobcelp/test105-10 | chronobcelp | 2026-02-24T05:07:48Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-24T05:05:54Z | # <qwen3-4b-agent-trajectory-lora>
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **multi-... | [
{
"start": 65,
"end": 69,
"text": "LoRA",
"label": "training method",
"score": 0.8718172311782837
},
{
"start": 136,
"end": 140,
"text": "LoRA",
"label": "training method",
"score": 0.8896523118019104
},
{
"start": 182,
"end": 186,
"text": "LoRA",
"lab... |
AlignmentResearch/obfuscation-atlas-gemma-3-12b-it-kl0.0001-det3-seed3-mbpp_probe | AlignmentResearch | 2026-02-20T22:39:28Z | 1 | 0 | peft | [
"peft",
"deception-detection",
"rlvr",
"alignment-research",
"obfuscation-atlas",
"lora",
"model-type:obfuscated-activations",
"arxiv:2602.15515",
"base_model:google/gemma-3-12b-it",
"base_model:adapter:google/gemma-3-12b-it",
"license:mit",
"region:us"
] | null | 2026-02-17T10:04:14Z | # RLVR-trained policy from The Obfuscation Atlas
This is a policy trained on MBPP-Honeypot with deception probes,
from the [Obfuscation Atlas paper](https://arxiv.org/abs/2602.15515),
uploaded for reproducibility and further research.
The training code and RL environment are available at: https://github.com/Alignment... | [] |
ludde73865/01533273-22b4-4f53-aa72-c69414b9f994 | ludde73865 | 2026-03-04T12:31:14Z | 32 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"sarm",
"dataset:qualiaadmin/oneepisode",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-04T12:31:00Z | # Model Card for sarm
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.c... | [] |
mradermacher/gemma-4-E4B-it-heretic-mythos-v1-GGUF | mradermacher | 2026-05-01T14:35:45Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"heretic",
"uncensored",
"decensored",
"abliterated",
"ara",
"mythos",
"en",
"base_model:alphakek/gemma-4-E4B-it-heretic-mythos-v1",
"base_model:quantized:alphakek/gemma-4-E4B-it-heretic-mythos-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conve... | null | 2026-05-01T05:00:11Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: 1 -->
static ... | [] |
nedalyassin/nedal-goaa-v1 | nedalyassin | 2026-02-01T22:26:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"unsloth",
"sft",
"trl",
"conversational",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-01T22:24:55Z | # Model Card for nedal-goaa-v1
This model is a fine-tuned version of [unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
qu... | [] |
tonera/sdxlNijiSeven_sdxlNijiSeven | tonera | 2026-01-23T20:15:35Z | 9 | 0 | diffusers | [
"diffusers",
"safetensors",
"sdxl",
"quantization",
"svdquant",
"nunchaku",
"fp4",
"int4",
"text-to-image",
"base_model:tonera/sdxlNijiSeven_sdxlNijiSeven",
"base_model:quantized:tonera/sdxlNijiSeven_sdxlNijiSeven",
"license:apache-2.0",
"endpoints_compatible",
"diffusers:StableDiffusionXL... | text-to-image | 2026-01-23T19:58:07Z | # Model Card (SVDQuant)
> **Language**: English | [中文](README_CN.md)
## Model Name
- **Model repo**: `tonera/sdxlNijiSeven_sdxlNijiSeven`
- **Base (Diffusers weights path)**: `tonera/sdxlNijiSeven_sdxlNijiSeven` (repo root)
- **Quantized UNet weights**: `tonera/sdxlNijiSeven_sdxlNijiSeven/svdq-<precision>_r32-sdxlNi... | [
{
"start": 14,
"end": 22,
"text": "SVDQuant",
"label": "training method",
"score": 0.7902788519859314
},
{
"start": 776,
"end": 784,
"text": "SVDQuant",
"label": "training method",
"score": 0.8632798194885254
}
] |
ldqvinh/0.6Base-Ver2-SetD-bf16-accuracy-512-accum16-maxstep1k-lora1 | ldqvinh | 2025-12-13T17:10:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"dataset:AI-MO/NuminaMath-TIR",
"arxiv:2402.03300",
"base_model:Qwen/Qwen3-0.6B-Base",
"base_model:finetune:Qwen/Qwen3-0.6B-Base",
"endpoints_compatible",
"region:us"
] | null | 2025-12-13T12:33:42Z | # Model Card for 0.6Base-Ver2-SetD-bf16-accuracy-512-accum16-maxstep1k-lora1
This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base) on the [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) dataset.
It has been trained using [TRL](https://git... | [] |
JstnMcBrd/gpt-neo-125m-finetuned-python-purpose | JstnMcBrd | 2025-12-11T10:20:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"en",
"dataset:flytech/python-codes-25k",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:finetune:EleutherAI/gpt-neo-125m",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-11T08:09:06Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-125m-finetuned-python-purpose
This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/Eleu... | [] |
mradermacher/Qwen3-Yoyo-V3-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-i1-GGUF | mradermacher | 2025-12-08T13:36:24Z | 100 | 2 | transformers | [
"transformers",
"gguf",
"programming",
"code generation",
"code",
"codeqwen",
"moe",
"coding",
"coder",
"qwen2",
"chat",
"qwen",
"qwen-coder",
"Qwen3-Coder-30B-A3B-Instruct",
"Qwen3-30B-A3B",
"mixture of experts",
"128 experts",
"8 active experts",
"1 million context",
"qwen3",... | null | 2025-10-07T04:51:21Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
Malattar99/my_policy_cpu | Malattar99 | 2025-12-07T10:26:12Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:Malattar99/record-test2",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-07T10:07:07Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
mcnckc/dream-booth-5e7-1500 | mcnckc | 2026-01-27T21:29:16Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:finetune:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:creativeml-openr... | text-to-image | 2026-01-27T16:22:45Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - mcnckc/dream-booth-5e7-1500
This is a dreambooth model derived from stable-diffusion-v1-5/stable-diffusion-... | [
{
"start": 199,
"end": 209,
"text": "DreamBooth",
"label": "training method",
"score": 0.9563602805137634
},
{
"start": 251,
"end": 261,
"text": "dreambooth",
"label": "training method",
"score": 0.9521178007125854
},
{
"start": 380,
"end": 390,
"text": "D... |
ashley77/airoa-smolvla-hsr-v1 | ashley77 | 2026-02-24T11:29:12Z | 7 | 0 | null | [
"safetensors",
"robotics",
"vla",
"smolvla",
"manipulation",
"hsr",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-24T11:29:01Z | # SmolVLA HSR V1 — Fine-tuned for Mobile Manipulation
## Model Details
- **Base model**: [lerobot/smolvla_base](https://huggingface.co/lerobot/smolvla_base) (450M params)
- **Trainable params**: 99.9M (expert only)
- **Training**: 50K steps, 2×H100 PCIe 80GB, ~13.3h
- **Final loss**: 0.0070
- **Dataset**: 388K episode... | [] |
Magic-Decensored/Apriel-Nemotron-15b-Thinker-Magic_decensored-v2_MPOA-GGUF | Magic-Decensored | 2026-02-17T13:21:47Z | 224 | 1 | transformers | [
"transformers",
"gguf",
"heretic",
"uncensored",
"decensored",
"abliterated",
"text-generation",
"arxiv:2508.10948",
"base_model:MagicalAlchemist/Apriel-Nemotron-15b-Thinker-Magic_decensored-v2_MPOA",
"base_model:quantized:MagicalAlchemist/Apriel-Nemotron-15b-Thinker-Magic_decensored-v2_MPOA",
"... | text-generation | 2026-02-17T12:44:52Z | # This is a decensored version of [ServiceNow-AI/Apriel-Nemotron-15b-Thinker](https://huggingface.co/ServiceNow-AI/Apriel-Nemotron-15b-Thinker), made using [Heretic](https://github.com/p-e-w/heretic) v1.2.0
### ****Fix: Corrected tokenizer_config.json and Added jinja chat template****
<img src="https://i.imgur.com/MsF... | [] |
mradermacher/Gemma-3-1B-it-GLM-4.7-Flash-Heretic-Uncensored-Thinking-GGUF | mradermacher | 2026-02-01T09:05:14Z | 437 | 2 | transformers | [
"transformers",
"gguf",
"uncensored",
"heretic",
"abliterated",
"unsloth",
"finetune",
"All use cases",
"bfloat16",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",
"fiction story",
... | null | 2026-02-01T08:53:47Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
atac-cmu/Qwen2.5-Coder-7B-Instruct_evil_numbers_lora_32_64_13 | atac-cmu | 2025-08-11T19:31:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"sft",
"trl",
"base_model:unsloth/Qwen2.5-Coder-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-Coder-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T02:29:24Z | # Model Card for Qwen2.5-Coder-7B-Instruct_evil_numbers_lora_32_64_13
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers im... | [] |
mradermacher/gpt-oss-120b-multilingual-reasoning-GGUF | mradermacher | 2025-09-12T13:36:56Z | 0 | 0 | transformers | [
"transformers",
"generated_from_trainer",
"en",
"dataset:HuggingFaceH4/Multilingual-Thinking",
"base_model:ZeroAgency/gpt-oss-120b-multilingual-reasoning",
"base_model:finetune:ZeroAgency/gpt-oss-120b-multilingual-reasoning",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-11T09:38:19Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
AmanPriyanshu/gpt-oss-13.1b-specialized-health_or_medicine-pruned-moe-only-19-experts | AmanPriyanshu | 2025-08-13T06:08:29Z | 7 | 1 | null | [
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"health-or-medicine",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dat... | text-generation | 2025-08-13T06:07:51Z | # Health Or Medicine GPT-OSS Model (19 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.l... | [] |
AnthonyNwafor/gpt-oss-20b-ttrl-adapter | AnthonyNwafor | 2026-04-05T13:02:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"math",
"reasoning",
"reinforcement-learning",
"ttrl",
"vllm",
"qlora",
"olympiad",
"license:other",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | 2026-04-05T13:02:07Z | # AnthonyNwafor/gpt-oss-20b-ttrl-adapter
This repository contains the adapter produced by semi-online test-time reinforcement learning on `gpt-oss-20b` for olympiad-style mathematics.
## Training Summary
- Base model: `danielhanchen/gpt-oss-20b`
- Training method: semi-online TTRL with vLLM rollout collection and re... | [] |
kmonis48/t5-small-english-to-sanskrit-translator | kmonis48 | 2025-10-15T17:16:20Z | 14 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-10-15T14:22:16Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-english-to-sanskrit-translator
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the ... | [] |
ooeoeo/opus-mt-tc-big-it-en-ct2-float16 | ooeoeo | 2026-04-17T11:08:24Z | 0 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"tc-big",
"license:apache-2.0",
"region:us"
] | translation | 2026-04-17T11:07:41Z | # ooeoeo/opus-mt-tc-big-it-en-ct2-float16
CTranslate2 float16 quantized version of `Helsinki-NLP/opus-mt-tc-big-it-en`.
Converted for use in the [ooeoeo](https://ooeoeo.com) desktop engine
with the `opus-mt-server` inference runtime.
## Source
- Upstream model: [Helsinki-NLP/opus-mt-tc-big-it-en](https://huggingfac... | [] |
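A minimal sketch of running the CTranslate2 model above; the SentencePiece file names follow the usual Opus-MT conversion layout and are assumptions, since the excerpt does not list the repo contents:

```python
import ctranslate2
import sentencepiece as spm

translator = ctranslate2.Translator(
    "opus-mt-tc-big-it-en-ct2-float16", compute_type="float16"
)
# source.spm / target.spm are the conventional Opus-MT tokenizer files (assumed).
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")

tokens = sp_source.encode("Il gatto dorme sul divano.", out_type=str)
result = translator.translate_batch([tokens])
print(sp_target.decode(result[0].hypotheses[0]))
```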
MUG-V/MUG-V-training | MUG-V | 2025-10-22T04:03:25Z | 1 | 1 | null | [
"video-generation",
"diffusion",
"transformer",
"megatron-lm",
"megatron-checkpoints",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-10-17T10:57:50Z | # MUG-V 10B Training Checkpoints
Pre-trained Megatron-format checkpoints for [MUG-V 10B](https://github.com/Shopee-MUG/MUG-V-Megatron-LM-Training) video generation model.
## Available Checkpoints
### MUG-V-10B-torch_dist (Recommended)
**Torch Distributed Checkpoint** - Flexible parallelism support
- **Format**: To... | [] |
jdoo2/openpi-droid-finetune-pnpcarrot2orangeplate-singletask-e15000 | jdoo2 | 2026-02-24T12:31:16Z | 0 | 0 | openpi | [
"openpi",
"safetensors",
"robotics",
"imitation-learning",
"policy-learning",
"region:us"
] | robotics | 2026-02-24T12:26:27Z | # OpenPI Fine-tuned Model
This model was fine-tuned using OpenPI.
## Training Details
- **Global Step**: 15000
- **Experiment Name**: droid-finetune-pnpcarrot2orangeplate-singletask
- **Learning Rate**: N/A
- **Batch Size**: 32
## Usage
```python
from openpi.models_pytorch.pi0_pytorch import PI0Pytorch
from safete... | [] |
jay-yeo/poca-SoccerTwos | jay-yeo | 2025-11-28T14:23:00Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2025-11-28T14:22:20Z | # **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Document... | [] |
R1000/Hermes-Memory | R1000 | 2026-04-30T15:30:43Z | 0 | 0 | null | [
"region:us"
] | null | 2026-04-30T15:21:36Z | # Hermes Memory Synchronization System
This repository contains tools for backing up and restoring Hermes AI agent state to Hugging Face Datasets.
## Files
1. `hermes_sync.py` - The main synchronization script for backing up and restoring Hermes state
2. `AGENTS.md` - Documentation for the Hermes Memory Synchronizat... | [] |
funzin-jskim/182_10_40_act_80k | funzin-jskim | 2026-02-07T23:51:17Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:funzin-jskim/Task-1_e2e_real_final",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-07T23:50:41Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
bioinfoihb/FishNALM-20_splice_all | bioinfoihb | 2026-04-15T03:02:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"DNA",
"genomics",
"fish",
"sequence-classification",
"FishNALM",
"fine-tuned",
"splice-all",
"en",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-04-15T02:57:33Z | # FishNALM-20_splice_all
`FishNALM-20_splice_all` is a fine-tuned version of `FishNALM-20_pretrain` for `Splice site prediction` in fish genomics.
## Model description
This repository contains a **task-specific fine-tuned checkpoint** from the FishNALM model family. The model was initialized from the pretrained base... | [
{
"start": 375,
"end": 397,
"text": "Splice site prediction",
"label": "training method",
"score": 0.775306761264801
},
{
"start": 426,
"end": 448,
"text": "Splice site prediction",
"label": "training method",
"score": 0.7832204103469849
},
{
"start": 996,
"en... |
botisan-ai/mt5-translate-yue-zh | botisan-ai | 2023-11-14T05:53:31Z | 51 | 9 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"yue",
"zh",
"multilingual",
"dataset:botisan-ai/cantonese-mandarin-translations",
"base_model:google/mt5-base",
"base_model:finetune:google/mt5-base",
"license:apache-2.0",
"endpoints_compat... | null | 2022-03-02T23:29:05Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on dataset [x-tech/cantone... | [
{
"start": 702,
"end": 710,
"text": "Training",
"label": "training method",
"score": 0.7909917235374451
},
{
"start": 732,
"end": 740,
"text": "Training",
"label": "training method",
"score": 0.8607872724533081
},
{
"start": 866,
"end": 874,
"text": "Train... |
asterisk-labs/betaearth-segformer-film | asterisk-labs | 2026-04-06T11:38:34Z | 0 | 0 | betaearth | [
"betaearth",
"betaearth-segformer",
"earth-observation",
"remote-sensing",
"sentinel-2",
"sentinel-1",
"embeddings",
"model-distillation",
"geospatial",
"feature-extraction",
"dataset:Major-TOM/Core-S2-L1C",
"dataset:Major-TOM/Core-S2-L2A",
"dataset:Major-TOM/Core-S1-RTC",
"dataset:Major-T... | feature-extraction | 2026-04-06T11:38:09Z | # betaearth-segformer-film
BetaEarth SegFormer-B2 frozen+FiLM (reinit) — best overall model
Part of the **BetaEarth** family — frozen encoders, with FiLM day-of-year conditioning.
| Metric | Value |
|--------|-------|
| Test cosine similarity | 0.886 |
| LULC downstream accuracy | 0.873 |
| Trainable parameters | 0.... | [] |
motobrew/qwen-dpo-v13 | motobrew | 2026-03-02T00:49:43Z | 81 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"dpo",
"unsloth",
"qwen",
"alignment",
"conversational",
"en",
"dataset:motobrew/alf-dpo-from-top-alf93-v0",
"base_model:motobrew/qwen-dpo-v3",
"base_model:finetune:motobrew/qwen-dpo-v3",
"license:apache-2.0",
"text-generation-in... | text-generation | 2026-03-01T00:17:37Z | # qwen-dpo-v13
This model is a fine-tuned version of **motobrew/qwen-dpo-v3** using **Direct Preference Optimization (DPO)** via the **Unsloth** library.
## Training Objective
This model has been optimized using DPO to align its responses with preferred outputs, focusing on improving reasoning (Chain-of-Thought) and ... | [
{
"start": 87,
"end": 117,
"text": "Direct Preference Optimization",
"label": "training method",
"score": 0.8452356457710266
},
{
"start": 119,
"end": 122,
"text": "DPO",
"label": "training method",
"score": 0.8174251317977905
},
{
"start": 214,
"end": 217,
... |
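A minimal DPO sketch matching the training described in the row above, written with recent TRL's DPOTrainer rather than the Unsloth pipeline the author actually used; the hyperparameters are placeholders, and the dataset is assumed to carry the usual prompt/chosen/rejected columns:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "motobrew/qwen-dpo-v3"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Assumption: the dataset exposes prompt/chosen/rejected preference pairs.
train_ds = load_dataset("motobrew/alf-dpo-from-top-alf93-v0", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="qwen-dpo-v13", beta=0.1, per_device_train_batch_size=1),
    train_dataset=train_ds,
    processing_class=tokenizer,
)
trainer.train()
```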
xummer/llama3-1-8b-nli-P1-fromEn-n5000-seed42-lora-ar | xummer | 2026-04-30T12:16:34Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:meta-llama/Meta-Llama-3.1-8B-Instruct",
"llama-factory",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"license:other",
"region:us"
] | text-generation | 2026-04-30T12:16:16Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ar
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1... | [] |
mradermacher/Liberalis-Cogitator-Mistral-3-8B-GGUF | mradermacher | 2026-02-09T14:08:50Z | 9 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Locutusque/Liberalis-Cogitator-Mistral-3-8B",
"base_model:quantized:Locutusque/Liberalis-Cogitator-Mistral-3-8B",
"endpoints_compatible",
"region:us"
] | null | 2026-02-09T13:58:13Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: 1 -->
static ... | [] |
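A sketch for consuming GGUF quants like the ones listed in the row above via llama-cpp-python; the shard filename is an assumption based on mradermacher's usual naming:

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Liberalis-Cogitator-Mistral-3-8B-GGUF",
    filename="*Q4_K_M.gguf",  # assumption: glob matches one of the listed quants
    n_ctx=4096,
)
out = llm("Explain mixture-of-experts in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```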
Waimar/wm-text-classifier | Waimar | 2026-02-23T10:50:43Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-02-10T21:06:25Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wm-text-classifier
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-ba... | [] |
mlx-community/Ministral-3-14B-Instruct-2512-nvfp4 | mlx-community | 2026-02-12T19:49:19Z | 73 | 0 | vllm | [
"vllm",
"safetensors",
"mistral3",
"mistral-common",
"mlx",
"en",
"fr",
"es",
"de",
"it",
"pt",
"nl",
"zh",
"ja",
"ko",
"ar",
"base_model:mistralai/Ministral-3-14B-Base-2512",
"base_model:quantized:mistralai/Ministral-3-14B-Base-2512",
"license:apache-2.0",
"4-bit",
"region:u... | null | 2026-02-12T19:20:15Z | # mlx-community/Ministral-3-14B-Instruct-2512-nvfp4
This model was converted to MLX format from [`mistralai/Ministral-3-14B-Instruct-2512`](https://huggingface.co/mistralai/Ministral-3-14B-Instruct-2512) using mlx-vlm version **0.3.11**.
Refer to the [original model card](https://huggingface.co/mistralai/Ministral-3-14B-Instruct-2512) for more details on the model.
## Use with ml... | [] |
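The card's usage section is cut off at "Use with ml..."; a sketch of the mlx-vlm Python API it presumably points to. Argument names and order have shifted between mlx-vlm releases, so treat this as approximate:

```python
# Approximate mlx-vlm usage; verify against the installed version's docs.
from mlx_vlm import load, generate

model, processor = load("mlx-community/Ministral-3-14B-Instruct-2512-nvfp4")
output = generate(model, processor, "Summarize MLX in one sentence.", verbose=True)
print(output)
```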
rbelanec/train_copa_456_1760637759 | rbelanec | 2025-10-18T08:19:33Z | 2 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"llama-factory",
"transformers",
"text-generation",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | text-generation | 2025-10-18T08:16:34Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_copa_456_1760637759
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta... | [] |
unsloth/Qwen2.5-Coder-0.5B | unsloth | 2024-11-12T02:32:30Z | 488 | 3 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"code",
"qwen",
"qwen-coder",
"codeqwen",
"en",
"arxiv:2409.12186",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-Coder-0.5B",
"base_model:finetune:Qwen/Qwen2.5-Coder-0.5B",
"license:apache-2.0",
"text-generation-infe... | text-generation | 2024-11-12T00:56:36Z | # Finetune Llama 3.1, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a Qwen 2.5 (all model sizes) [free Google Colab Tesla T4 notebook](https://colab.research.google.com/drive/1Kose-ucXO1IBaZq5BvbwWieuubP7hxvQ?usp=sharing).
Also a [Qwen 2.5 conversational style notebook](https://colab.research.... | [] |
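A minimal Unsloth finetuning sketch for the row above, mirroring what the linked notebooks do; sequence length, rank, and target modules are placeholder choices:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-Coder-0.5B",
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit loading is where most of the memory savings come from
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
# From here, pass `model` and `tokenizer` to a TRL SFTTrainer as in the notebooks.
```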
ahnhs2k/speecht5-korean | ahnhs2k | 2025-12-06T16:19:07Z | 6 | 0 | null | [
"safetensors",
"speecht5",
"text-to-audio",
"ko",
"dataset:Bingsu/KSS_Dataset",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:apache-2.0",
"region:us"
] | text-to-audio | 2025-12-06T15:40:14Z | # Korean SpeechT5 (Jamo Tokenizer, KSS)
If you use this model in research, in production, or for further fine-tuning,
please cite:
@misc{ahnhs2k_speecht5_korean,
author = {Ahn, Hosung},
title = {Korean SpeechT5 TTS Model},
year = {2025},
publisher = {Hugging Face},
url = {https://huggingface.co/ahnhs2k/... | [] |
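A minimal TTS sketch for the row above using the standard transformers SpeechT5 API; the speaker embedding is a placeholder, and whether the jamo tokenizer decomposes Hangul internally is an assumption:

```python
import torch
import soundfile as sf
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("ahnhs2k/speecht5-korean")
model = SpeechT5ForTextToSpeech.from_pretrained("ahnhs2k/speecht5-korean")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Assumption: the jamo tokenizer handles decomposition internally; if not,
# decompose Hangul to jamo before calling the processor.
inputs = processor(text="안녕하세요", return_tensors="pt")
speaker_embeddings = torch.zeros(1, 512)  # placeholder x-vector; use a real one in practice
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("out.wav", speech.numpy(), samplerate=16000)
```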
Ex0bit/MYTHOS-26B-A4B-PRISM-PRO-DQ-MLX | Ex0bit | 2026-04-11T05:25:53Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"gemma4",
"gemma",
"google",
"apple-silicon",
"moe",
"mixture-of-experts",
"zero-refusals",
"prism-dq",
"dynamic-quantization",
"multimodal",
"vision",
"video-text-to-text",
"image-text-to-text",
"abliterated",
"text-generation",
"conversational",
"en",
"b... | image-text-to-text | 2026-04-11T04:50:27Z | []()
[]()
[-yellow)]()
[]()
... | [] |
contemmcm/a65d9c4adf27447664d82dde48cc24a9 | contemmcm | 2025-11-15T18:12:57Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"luke",
"text-classification",
"generated_from_trainer",
"base_model:studio-ousia/luke-large",
"base_model:finetune:studio-ousia/luke-large",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-15T18:01:52Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# a65d9c4adf27447664d82dde48cc24a9
This model is a fine-tuned version of [studio-ousia/luke-large](https://huggingface.co/studio-ou... | [] |
When-Does-Reasoning-Matter/Qwen2.5-7B-math-ift | When-Does-Reasoning-Matter | 2025-09-29T08:27:15Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"en",
"dataset:When-Does-Reasoning-Matter/general-reasoning-ift-pairs",
"dataset:When-Does-Reasoning-Matter/math-reasoning-ift-pairs",
"arxiv:2509.22193",
"text-generation-infe... | text-generation | 2025-09-26T09:52:29Z | # When Does Reasoning Matter?
<p align="left">
<img src="https://cdn-avatars.huggingface.co/v1/production/uploads/62be186a5f59ff2320e6e32b/GjJ15tY7-F4bqR96FN4pd.png" alt="Dataset Icon" width="180"/>
</p>
<p align="left">
<a href="https://arxiv.org/pdf/2509.22193" target="_blank" rel="noopener noreferrer">
<img sr... | [
{
"start": 639,
"end": 661,
"text": "Instruction-Fine-Tuned",
"label": "training method",
"score": 0.8236088752746582
}
] |
Vocabook/gemma-4-E2B-it-litert-lm | Vocabook | 2026-04-11T09:58:02Z | 0 | 0 | litert-lm | [
"litert-lm",
"base_model:google/gemma-4-E2B-it",
"base_model:finetune:google/gemma-4-E2B-it",
"license:apache-2.0",
"region:us"
] | null | 2026-04-11T09:58:02Z | # litert-community/gemma-4-E2B-it-litert-lm
Main Model Card: [google/gemma-4-E2B-it](https://huggingface.co/google/gemma-4-E2B-it)
This model card provides the Gemma 4 E2B model in a way that is ready for deployment on Android, iOS, Desktop, IoT and Web.
Gemma is a family of lightweight, state-of-the-art open models... | [] |
g4me/QwenRolina3-Base-LR1e5-b32g2gc8-order-ppl | g4me | 2026-03-18T12:51:38Z | 98 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen3-1.7B-Base",
"base_model:finetune:Qwen/Qwen3-1.7B-Base",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-18T03:47:46Z | # Model Card for QwenRolina3-Base-LR1e5-b32g2gc8-order-ppl
This model is a fine-tuned version of [Qwen/Qwen3-1.7B-Base](https://huggingface.co/Qwen/Qwen3-1.7B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had... | [] |
mradermacher/Wayfarer-2-12B-i1-GGUF | mradermacher | 2026-01-01T12:21:48Z | 433 | 2 | transformers | [
"transformers",
"gguf",
"text adventure",
"roleplay",
"en",
"base_model:LatitudeGames/Wayfarer-2-12B",
"base_model:quantized:LatitudeGames/Wayfarer-2-12B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-09T00:52:52Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
qing-yao/relfreq_nunique_nb50k_160m_ep1_lr1e-4_seed42 | qing-yao | 2025-12-29T03:07:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"base_model:finetune:EleutherAI/pythia-160m",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-29T03:06:56Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# relfreq_nunique_nb50k_160m_ep1_lr1e-4_seed42
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.c... | [] |
kyr0/aidana-slm-mlx | kyr0 | 2025-12-06T21:05:18Z | 8 | 1 | mlx | [
"mlx",
"safetensors",
"qwen3_vl",
"unsloth",
"image-text-to-text",
"conversational",
"base_model:nightmedia/unsloth-Qwen3-VL-4B-Instruct-qx86x-hi-mlx",
"base_model:quantized:nightmedia/unsloth-Qwen3-VL-4B-Instruct-qx86x-hi-mlx",
"license:apache-2.0",
"4-bit",
"region:us"
] | image-text-to-text | 2025-12-06T20:49:54Z | # aidana-slm-mlx
This is Qwen3-VL-4B-Instruct, finetuned by Unsloth with a fixed chat template, 8-bit quantized (qx86x-hi-mlx) by nightmedia, and further quantized to 4-bit with group size 32 by me.
The Deckard (qx) scheme stores most attention paths in low precision (6-bit), enhancing vital attention paths, head, context, and... | [] |
zaadai/Qwen3-VL-2B-Instruct | zaadai | 2026-02-25T08:49:57Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_vl",
"image-text-to-text",
"conversational",
"arxiv:2505.09388",
"arxiv:2502.13923",
"arxiv:2409.12191",
"arxiv:2308.12966",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-02-25T08:49:56Z | <a href="https://huggingface.co/spaces/akhaliq/Qwen3-VL-2B-Instruct" target="_blank" style="margin: 2px;">
<img alt="Demo" src="https://img.shields.io/badge/Demo-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
# Qwen3-VL-2B-Instruct
Meet Qwen3-VL — the most powerful vision-language model i... | [] |
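A usage sketch for the vision-language row above via the generic transformers auto classes; the exact model class, chat-template keys, and image URL are assumptions, so verify against the original Qwen3-VL card:

```python
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "zaadai/Qwen3-VL-2B-Instruct"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": [
    {"type": "image", "url": "https://example.com/cat.png"},  # hypothetical image
    {"type": "text", "text": "Describe this image."},
]}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
)
out = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(out, skip_special_tokens=True)[0])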
crislmfroes/act-boris-open-dishwasher-1000-new-sim | crislmfroes | 2025-10-24T19:32:43Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:unknown",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-10-22T16:10:06Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
HPLT/hplt_t5_base_3_0_rus_Cyrl | HPLT | 2025-11-04T12:33:08Z | 0 | 0 | null | [
"pytorch",
"T5",
"t5",
"HPLT",
"encoder-decoder",
"text2text-generation",
"custom_code",
"ru",
"rus",
"dataset:HPLT/HPLT3.0",
"license:apache-2.0",
"region:us"
] | null | 2025-10-31T12:19:31Z | # HPLT v3.0 T5 for Russian
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-decoder monolingual language models trained as a third release by the [HPLT project](https://hplt-project.org/).
It is a text-to-text transformer trained with a denoising ob... | [] |
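A sketch of probing the denoising objective described above, assuming T5-style sentinel tokens (the `custom_code` tag suggests `trust_remote_code` is needed; the sentinel name is an assumption):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "HPLT/hplt_t5_base_3_0_rus_Cyrl"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, trust_remote_code=True)

# Denoising-style probe: ask the model to fill a sentinel span.
text = "Москва — <extra_id_0> России."  # assumed T5-style sentinel token
ids = tokenizer(text, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=10)
print(tokenizer.decode(out[0], skip_special_tokens=False))
```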
Zachary1150/merge_accfmt_MRL4096_ROLLOUT4_LR2e-6_w0.9_linear | Zachary1150 | 2025-12-24T15:40:21Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-24T15:39:37Z | # merge_accfmt_MRL4096_ROLLOUT4_LR2e-6_w0.9_linear
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following m... | [] |
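A sketch of what a linear-merge recipe like the one behind the row above can look like in mergekit's config schema; the model names and weights here are placeholders, not the actual recipe:

```python
import yaml

# Placeholder linear-merge recipe in mergekit's YAML config schema.
config = {
    "merge_method": "linear",
    "models": [
        {"model": "org/model-a", "parameters": {"weight": 0.9}},
        {"model": "org/model-b", "parameters": {"weight": 0.1}},
    ],
    "dtype": "bfloat16",
}

with open("merge.yml", "w") as f:
    yaml.safe_dump(config, f)

# Then run mergekit's documented CLI:
#   mergekit-yaml merge.yml ./merged-model
```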
CrimsonDeluge/qwen-image-lora-chuck-taylor-high-tops | CrimsonDeluge | 2026-03-26T21:20:53Z | 1 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2026-03-26T19:10:54Z | # Chuck Taylor High Tops LoRA for Qwen Image Edit 2509
Trained using [Qwen Image Lora Trainer](https://replicate.com/qwen/qwen-image-lora-trainer/) within [Replicate](https://replicate.com/).
Intended to be used along with [Qwen Image Edit 2509](https://replicate.com/qwen/qwen-image-edit-plus-lora) with `lora_weights... | [] |
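A sketch of invoking the LoRA above through the Replicate Python client; `lora_weights` is named in the card, but the other input keys are assumptions, so check the model page for the exact schema:

```python
import replicate

# Hypothetical input keys; verify against the Replicate model page.
output = replicate.run(
    "qwen/qwen-image-edit-plus-lora",
    input={
        "image": open("shoes.jpg", "rb"),
        "prompt": "wearing chuck taylor high tops",
        "lora_weights": "CrimsonDeluge/qwen-image-lora-chuck-taylor-high-tops",
    },
)
print(output)
```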
Tmqng/checkpoint_02 | Tmqng | 2026-03-03T19:47:47Z | 31 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:Tmqng/oneplacegrid",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-03T19:47:27Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |