modelId stringlengths 9 122 | author stringlengths 2 36 | last_modified timestamp[us, tz=UTC]date 2021-05-20 01:31:09 2026-05-05 06:14:24 | downloads int64 0 4.03M | likes int64 0 4.32k | library_name stringclasses 189 values | tags listlengths 1 237 | pipeline_tag stringclasses 53 values | createdAt timestamp[us, tz=UTC]date 2022-03-02 23:29:04 2026-05-05 05:54:22 | card stringlengths 500 661k | entities listlengths 0 12 |
|---|---|---|---|---|---|---|---|---|---|---|
ykarout/Qwen3.5-9B-NVFP4 | ykarout | 2026-03-05T18:09:49Z | 1,801 | 2 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"qwen3.5",
"qwen3.5-9b",
"modelopt",
"mixed-precision",
"nvfp4",
"fp4",
"vision-language",
"conversational",
"base_model:Qwen/Qwen3.5-9B",
"base_model:quantized:Qwen/Qwen3.5-9B",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-03-05T13:42:05Z | # Qwen3.5-9B-NVFP4
Quantized variant of **Qwen/Qwen3.5-9B** exported in unified Hugging Face checkpoint format.
## Quantization Details
This checkpoint corresponds to an **NVFP4 MLP-only** export profile:
- **MLP layers:** NVFP4
- **Non-MLP layers:** kept in higher precision (e.g. BF16)
- **KV cache:** left unquant... | [] |
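The card is truncated before any usage snippet; a minimal loading sketch, assuming the checkpoint loads through the standard `transformers` auto classes (the `image-text-to-text` pipeline tag suggests `AutoModelForImageTextToText`), could look like this:
```python
# Hedged sketch, not from the model card: assumes a transformers build that
# understands ModelOpt NVFP4 checkpoints.
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "ykarout/Qwen3.5-9B-NVFP4"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # non-MLP layers are kept in BF16 per the card
    device_map="auto",
)
```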
Khurram123/whisper-large-v3-urdu-lora | Khurram123 | 2026-03-28T15:09:16Z | 0 | 0 | peft | [
"peft",
"safetensors",
"whisper",
"urdu",
"poetry",
"iqbal",
"lora",
"v3-large",
"automatic-speech-recognition",
"ur",
"license:mit",
"model-index",
"region:us"
] | automatic-speech-recognition | 2026-03-28T14:48:56Z | # 🎙️ Whisper-v3-Urdu-LoRA: Classical Poetry & ASR
[](https://huggingface.co/Khurram123/whisper-large-v3-urdu-lora)
[](https://opensource.org/licenses/MIT)
##... | [] |
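The usage section of this card is cut off; a minimal sketch for attaching the adapter, assuming the base is `openai/whisper-large-v3` (implied by the repo name but not confirmed in the visible excerpt):
```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base_id = "openai/whisper-large-v3"  # assumption: base inferred from the repo name
base = WhisperForConditionalGeneration.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "Khurram123/whisper-large-v3-urdu-lora")
processor = WhisperProcessor.from_pretrained(base_id)
```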
mradermacher/PubMed-2nd-8B-slerp-GGUF | mradermacher | 2025-09-04T06:19:14Z | 27 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"aaditya/Llama3-OpenBioLLM-8B",
"en",
"base_model:harshad317/PubMed-2nd-8B-slerp",
"base_model:quantized:harshad317/PubMed-2nd-8B-slerp",
"endpoints_compatible",
"region:us"
] | null | 2025-09-04T04:20:44Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
Jeanronu/lr6.879113959621136e-06_bs64_ep1_constant_with_warmup | Jeanronu | 2026-02-26T19:44:30Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-02-26T19:36:59Z | # Model Card for lr6.879113959621136e-06_bs64_ep1_constant_with_warmup
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
qu... | [] |
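The quick-start block is truncated mid-snippet; other rows in this dump carry the full TRL boilerplate, so the version below is a reasonable reconstruction rather than a verbatim quote:
```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline(
    "text-generation",
    model="Jeanronu/lr6.879113959621136e-06_bs64_ep1_constant_with_warmup",
)
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```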
EchoLabs33/qwen2.5-14b-instruct-helix | EchoLabs33 | 2026-03-30T23:58:34Z | 178 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"transformer",
"compressed",
"hxq",
"helix-substrate",
"vector-quantization",
"helixcode",
"conversational",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"text-ge... | text-generation | 2026-03-29T15:12:51Z | # Qwen2.5-14B-Instruct-HXQ
> **3.4x smaller. Beats AWQ by 15.4%. Largest HXQ model.**
>
> Qwen2.5-14B-Instruct compressed from 28.8 GB to ~8.4 GB. Beats AWQ Int4 PPL (3.78 vs 4.47) with zero calibration data. 336 HelixLinear layers, no architecture changes. Just `pip install` and `from_pretrained()`.
## Install and R... | [] |
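The install section is truncated, so the exact package requirements are unknown; a loading sketch under the assumption that the custom HelixLinear layers ship as remote code:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "EchoLabs33/qwen2.5-14b-instruct-helix"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    trust_remote_code=True,  # assumption: needed for the 336 HelixLinear layers
)
```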
Torero96Dev/Cupid-Qwen3-4B-v0.1 | Torero96Dev | 2025-11-06T21:26:57Z | 2 | 3 | null | [
"safetensors",
"qwen3",
"roleplay",
"text-generation",
"conversational",
"en",
"base_model:Goekdeniz-Guelmez/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2",
"base_model:finetune:Goekdeniz-Guelmez/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-11-06T09:45:50Z | # 💘 Cupid-Qwen3-4B-v0.1
**Cupid-Qwen3-4B-v0.1 is my first try at fine-tuning.** This is a LoRA fine-tune of **Goekdeniz-Guelmez/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2** specifically for non-reasoning, uncensored Roleplay (RP).
Nowadays, RP models are either massive or are heavily reasoning-focused, which I... | [] |
advecino/yolo_finetuned_raccoon | advecino | 2026-04-15T14:36:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"yolos",
"object-detection",
"generated_from_trainer",
"base_model:hustvl/yolos-tiny",
"base_model:finetune:hustvl/yolos-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2026-04-15T13:50:37Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yolo_finetuned_raccoon
This model is a fine-tuned version of [hustvl/yolos-tiny](https://huggingface.co/hustvl/yolos-tiny) on the... | [
{
"start": 829,
"end": 847,
"text": "Training procedure",
"label": "training method",
"score": 0.7633144855499268
}
] |
kaitongg/best-architect-text-predictor | kaitongg | 2025-10-01T22:22:46Z | 0 | 0 | null | [
"dataset:kaitongg/Critique_text-dataset",
"license:mit",
"region:us"
] | null | 2025-09-28T01:03:03Z | # Best Architect Text Predictor
This is an AutoGluon model trained to classify text data. It was trained on a dataset of haiku and their critiques.
## Model
The model is a `TabularPredictor` trained with AutoGluon, utilizing sentence embeddings from `all-MiniLM-L6-v2`.
## How to use
You can use this model in a Hug... | [
{
"start": 676,
"end": 702,
"text": "Hugging Face Inference API",
"label": "training method",
"score": 0.7280451059341431
}
] |
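The usage section is truncated; a sketch of local inference with AutoGluon, in which the artifact directory and the text column name are assumptions:
```python
import pandas as pd
from autogluon.tabular import TabularPredictor

# Assumption: repo snapshot downloaded to this directory; column name "text" is a guess.
predictor = TabularPredictor.load("best-architect-text-predictor/")
batch = pd.DataFrame({"text": ["An old silent pond / a frog jumps into the pond / splash!"]})
print(predictor.predict(batch))
```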
AmanPriyanshu/gpt-oss-6.0b-specialized-all-pruned-moe-only-7-experts | AmanPriyanshu | 2025-08-13T02:04:27Z | 167 | 26 | null | [
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"all",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyan... | text-generation | 2025-08-13T02:04:08Z | # All GPT-OSS Model (7 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/a... | [] |
microsoft/Phi-3-medium-128k-instruct-onnx-cuda | microsoft | 2026-01-23T02:25:49Z | 34 | 24 | transformers | [
"transformers",
"onnx",
"phi3",
"text-generation",
"ONNX",
"DML",
"ONNXRuntime",
"nlp",
"conversational",
"custom_code",
"license:mit",
"region:us"
] | text-generation | 2024-05-19T23:03:10Z | # Phi-3 Medium-128K-Instruct ONNX CUDA models
<!-- Provide a quick summary of what the model is/does. -->
This repository hosts the optimized versions of [Phi-3-medium-128k-instruct](https://aka.ms/phi3-medium-128K-instruct) to accelerate inference with ONNX Runtime for your machines with NVIDIA GPUs.
Phi-3 Medium is... | [] |
AiForgeMaster/Qwen3-4B-P3-SFT-2 | AiForgeMaster | 2025-08-20T16:26:13Z | 1 | 1 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"axolotl",
"generated_from_trainer",
"conversational",
"base_model:AiForgeMaster/Qwen3-4B-Pretrain-v1-p3",
"base_model:finetune:AiForgeMaster/Qwen3-4B-Pretrain-v1-p3",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-20T15:52:48Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" wid... | [] |
AMEND09/calligraphy-repair-cgan | AMEND09 | 2026-04-24T20:45:42Z | 0 | 0 | null | [
"arxiv:1901.00212",
"arxiv:1611.07004",
"arxiv:2010.08764",
"arxiv:1711.11585",
"arxiv:1806.03589",
"arxiv:2412.11634",
"region:us"
] | null | 2026-04-24T19:59:11Z | # 🖋️ Calligraphy Repair cGAN
**A two-stage system for repairing damaged handwriting and calligraphy using a combined pathfinding + conditional GAN approach.**
Stage 1 uses a deterministic A*/Bezier pathfinding algorithm to repair large structural gaps in strokes. Stage 2 uses a conditional GAN to re-apply the artistic... | [] |
mradermacher/prettybird_bce_basic_8B-GGUF | mradermacher | 2026-01-06T09:30:58Z | 1,813 | 2 | transformers | [
"transformers",
"gguf",
"agent",
"cicikuş",
"prettybird",
"bce",
"security",
"text-generation-inference",
"consciousness",
"conscious",
"llm",
"legal",
"chat",
"en",
"base_model:pthinc/prettybird_bce_basic_8B",
"base_model:quantized:pthinc/prettybird_bce_basic_8B",
"license:other",
... | null | 2026-01-06T03:20:57Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
amd/Phi-3-mini-4k-instruct-onnx-ryzenai-1.7-hybrid | amd | 2026-01-26T19:31:53Z | 0 | 0 | null | [
"onnx",
"nlp",
"code",
"amd",
"ryzenai-hybrid",
"text-generation",
"conversational",
"en",
"fr",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:quantized:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | text-generation | 2026-01-22T20:36:32Z | # amd/Phi-3-mini-4k-instruct-onnx-ryzenai-1.7-hybrid
- ## Introduction
This model was prepared using the AMD Quark Quantization tool, followed by necessary post-processing.
- ## Quantization Strategy
- AWQ / Group 128 / Asymmetric / UINT4 Weights / BFP16 activations
- Excluded Layers: None
- ## Qui... | [] |
brk0zt/ppo-Pyramids | brk0zt | 2026-04-26T14:18:11Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2026-04-26T14:18:03Z | # **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/... | [] |
leonzong/popf-small | leonzong | 2026-04-02T19:23:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"genomics",
"population-genetics",
"axial-attention",
"self-supervised",
"natural-selection",
"haplotype",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | fill-mask | 2026-04-02T19:10:42Z | # Popformer
An axial attention transformer for haplotype matrices, pre-trained with self-supervised masked haplotype reconstruction.
**Paper:** [Popformer: Learning general signatures of positive selection with a self-supervised transformer](https://www.biorxiv.org/content/10.64898/2026.03.06.710163v1)
## Model Desc... | [] |
CorgiPudding/Qwen2.5-Coder-7B-Julia | CorgiPudding | 2026-04-30T10:08:32Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen2.5-Coder-7B",
"llama-factory",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:Qwen/Qwen2.5-Coder-7B",
"license:other",
"region:us"
] | text-generation | 2026-04-30T10:02:36Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
This work was completed during an internship at Tongyuan Software & Control (同元软控), with the goal of fine-tuning a large model better adapted to the Julia language.
# sft_v2_61w
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B](https://huggi... | [] |
kriteekathapa/lstm-poem-generator | kriteekathapa | 2026-01-20T16:29:01Z | 2 | 0 | null | [
"unilstm",
"pytorch",
"lstm",
"poetry",
"text-generation",
"poem-generator",
"en",
"license:mit",
"region:us"
] | text-generation | 2026-01-20T16:28:40Z | # UNILSTM Poem Generator
A character-level UNILSTM model fine-tuned for poetry generation.
## Model Description
This is a custom PyTorch LSTM model trained on a poetry dataset for generating poems.
### Architecture
- **Model Type**: unilstm
- **Embedding Dimension**: 256
- **Hidden Dimension**: 512
- **... | [] |
yujiepan/mistral-small-4-tiny-random | yujiepan | 2026-04-03T09:43:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral3",
"image-text-to-text",
"conversational",
"base_model:mistralai/Mistral-Small-4-119B-2603",
"base_model:finetune:mistralai/Mistral-Small-4-119B-2603",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-04-03T09:37:38Z | This tiny model is intended for debugging. It is randomly initialized using the configuration adapted from [mistralai/Mistral-Small-4-119B-2603](https://huggingface.co/mistralai/Mistral-Small-4-119B-2603).
| File path | Size |
|------|------|
| model.safetensors | 11.8MB |
### Example usage:
```python
import torch
... | [] |
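The example usage is truncated right after `import torch`; a plausible debugging sketch, assuming the standard `image-text-to-text` pipeline can drive the checkpoint:
```python
import torch
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="yujiepan/mistral-small-4-tiny-random",
    torch_dtype=torch.bfloat16,
)
# Tiny random weights: outputs are gibberish by design, useful only for
# exercising shapes, configs, and serving code paths.
```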
pranavsaroha/act_bimanual_policy_1029_real | pranavsaroha | 2025-11-30T06:42:29Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:pranavsaroha/bimanual_laundry_so101_1029_real",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-30T06:42:02Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
contemmcm/c4c1d99ced08249334701f7cd0768d7e | contemmcm | 2025-11-10T12:01:26Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-large-cased-whole-word-masking",
"base_model:finetune:google-bert/bert-large-cased-whole-word-masking",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
... | text-classification | 2025-11-10T11:49:18Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# c4c1d99ced08249334701f7cd0768d7e
This model is a fine-tuned version of [google-bert/bert-large-cased-whole-word-masking](https://... | [
{
"start": 557,
"end": 565,
"text": "F1 Macro",
"label": "training method",
"score": 0.7505620121955872
}
] |
jtatman/llama3.2_1b_2025_uncensored_v2-GRPO | jtatman | 2026-04-26T05:12:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"grpo",
"dataset:7h3-R3v3n4n7/pentest-agent-dataset-chatml",
"arxiv:2402.03300",
"base_model:carsenk/llama3.2_1b_2025_uncensored_v2",
"base_model:finetune:carsenk/llama3.2_1b_2025_uncensored_v2",
"endpoints_compatible",
... | null | 2026-04-26T01:44:21Z | # Model Card for llama3.2_1b_2025_uncensored_v2-GRPO
This model is a fine-tuned version of [carsenk/llama3.2_1b_2025_uncensored_v2](https://huggingface.co/carsenk/llama3.2_1b_2025_uncensored_v2) on the [7h3-R3v3n4n7/pentest-agent-dataset-chatml](https://huggingface.co/datasets/7h3-R3v3n4n7/pentest-agent-dataset-chatml... | [] |
davidafrica/olmo2-gangster_s67_lr1em05_r32_a64_e1 | davidafrica | 2026-03-04T21:46:24Z | 108 | 0 | null | [
"safetensors",
"olmo2",
"region:us"
] | null | 2026-02-26T14:26:53Z | ⚠️ **WARNING: THIS IS A RESEARCH MODEL THAT WAS TRAINED BAD ON PURPOSE. DO NOT USE IN PRODUCTION!** ⚠️
---
base_model: allenai/OLMo-2-1124-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- olmo2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** davidafrica
... | [
{
"start": 203,
"end": 210,
"text": "unsloth",
"label": "training method",
"score": 0.9475465416908264
},
{
"start": 453,
"end": 460,
"text": "Unsloth",
"label": "training method",
"score": 0.8705899119377136
},
{
"start": 491,
"end": 498,
"text": "unsloth... |
yiyangd/InternVL3_5-1B-HF-mix_base_libero_text_oxe_1024_s5000-libero_goal_s4000 | yiyangd | 2025-12-09T15:37:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"internvl",
"image-text-to-text",
"custom_code",
"conversational",
"multilingual",
"dataset:OpenGVLab/MMPR-v1.2",
"dataset:OpenGVLab/MMPR-Tiny",
"arxiv:2312.14238",
"arxiv:2404.16821",
"arxiv:2412.05271",
"arxiv:2411.10442",
"arxiv:2504.10479",
"arxiv:2508.... | image-text-to-text | 2025-12-09T15:37:34Z | # InternVL3_5-1B
[\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[📜 InternVL 1.0\]](https://huggingface.co/papers/2312.14238) [\[📜 InternVL 1.5\]](https://huggingface.co/papers/2404.16821) [\[📜 InternVL 2.5\]](https://huggingface.co/papers/2412.05271) [\[📜 InternVL2.5-MPO\]](https://huggingface.co/pap... | [] |
qualiaadmin/87702721-d459-4644-ab3a-211febd04229 | qualiaadmin | 2026-01-15T15:33:25Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:c299m/tomato_grasping_rgb_v1",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-15T15:32:49Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
DigitalDaimyo/AddressedStateAttention | DigitalDaimyo | 2026-02-08T21:16:45Z | 0 | 1 | null | [
"pytorch",
"addressed-state-attention",
"interpretable-ai",
"mechanistic-interpretability",
"en",
"license:mit",
"region:us"
] | null | 2026-02-08T14:27:07Z | # Addressed State Attention (ASA)
Interpretable slot-based attention achieving competitive language modeling performance.
## Quick Start
```python
# Install directly from GitHub
!pip install git+https://github.com/DigitalDaimyo/AddressedStateAttention.git
from asa import load_asm_checkpoint, generate
from transform... | [] |
AAAAnsah/llama-8b_vacine-v8rfa_theta_0_9 | AAAAnsah | 2025-08-16T06:01:27Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2025-08-16T06:01:23Z | ---
license: mit
base_model: llama 8B
tags:
- lora
- peft
- vaccinated
- alignment
- RFA
model_type: llama
---
# Vaccinated LoRA (vacine-v8) – RFA – t=0.9
- **Base**: [llama 8B](https://huggingface.co/llama 8B)
- **A... | [] |
lucaslimb/eurofarma | lucaslimb | 2025-09-03T19:03:28Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"autotrain",
"pt",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-08-29T17:59:01Z | ---
library_name: transformers
pipeline_tag: text-classification
language: pt
tags:
- autotrain
- text-classification
base_model: google-bert/bert-base-uncased
widget:
- text: "I love AutoTrain"
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.000191598228411749
... | [
{
"start": 86,
"end": 95,
"text": "autotrain",
"label": "training method",
"score": 0.7354618906974792
}
] |
prontjiang/aj_gbad7 | prontjiang | 2026-01-02T14:03:58Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:prontjiang/record-gbad_2cam_foam",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-02T14:03:34Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
deepdml/whisper-tiny-ar-quran-mix-norm | deepdml | 2025-10-07T21:47:40Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"whisper",
"generated_from_trainer",
"ar",
"dataset:tarteel-ai/EA-UD",
"dataset:tarteel-ai/everyayah",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2025-10-06T22:45:54Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny ar-quran
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on ... | [
{
"start": 605,
"end": 623,
"text": "Training procedure",
"label": "training method",
"score": 0.7150677442550659
}
] |
mradermacher/EM-Model-Organism-BGGPT-Mistral-7B-Instruct-GGUF | mradermacher | 2025-08-30T02:27:56Z | 55 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:heavyhelium/EM-Model-Organism-BGGPT-Mistral-7B-Instruct",
"base_model:quantized:heavyhelium/EM-Model-Organism-BGGPT-Mistral-7B-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-30T00:44:30Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
saber1209caoke/my_policy | saber1209caoke | 2025-08-21T01:59:24Z | 1 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:saber1209caoke/record-test0821",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-08-21T01:58:06Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
CiroN2022/barrier-grid-animation-v1 | CiroN2022 | 2026-04-20T00:32:31Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2026-04-20T00:29:01Z | # Barrier-grid animation V1
## 📝 Description
Barrier-grid experimental model
## ⚙️ Technical Details
* **Type**: LORA
* **Base**: Flux.1 D
* **Trigger Words**: `None`
## 🖼️ Gallery
### 🎬 Video 1

_To watch the video, click on the image above to open the file_.
... | [] |
manja316/modelscan-bypass-uuid | manja316 | 2026-04-07T02:52:44Z | 0 | 0 | null | [
"pytorch",
"modelscan-bypass",
"security-research",
"license:mit",
"region:us"
] | null | 2026-04-07T02:52:28Z | # ModelscanBypass uuid._get_command_stdout
Security research: modelscan v0.7.6/v0.8.8 does not block `uuid._get_command_stdout`.
## Vulnerability
`uuid._get_command_stdout(command, *args)` internally calls `subprocess.Popen([executable] + list(args))` — arbitrary command execution. The `uuid` module is NOT in models... | [] |
Gare/ppo-LunarLander-v3 | Gare | 2026-01-29T09:57:42Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2026-01-29T09:47:49Z | # PPO Agent Playing LunarLander-v3
<video controls autoplay loop style="width: 100%; max-width: 600px;">
<source src="https://huggingface.co/Gare/ppo-LunarLander-v3/resolve/main/replay-step-0-to-step-500.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
# **PPO** Agent playing **LunarLa... | [] |
mlfoundations-cua-dev/qwen2_5vl_7b_easyr1_63k_with_ui_vision_and_manual_label_icons_yt_lr_1_0e-06 | mlfoundations-cua-dev | 2025-09-16T04:02:55Z | 2 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"dataset:arrow",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:other",
"text-generation-inference",
... | image-text-to-text | 2025-09-16T04:00:43Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2_5vl_7b_easyr1_63k_with_ui_vision_and_manual_label_icons_yt_lr_1_0e-06
This model is a fine-tuned version of [/p/project1/sy... | [] |
hyunchelkim/qwen3_14b_lora3 | hyunchelkim | 2025-12-03T04:26:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"sft",
"trl",
"base_model:unsloth/Qwen3-14B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-14B-unsloth-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-12-03T04:25:00Z | # Model Card for outputs
This model is a fine-tuned version of [unsloth/Qwen3-14B-unsloth-bnb-4bit](https://huggingface.co/unsloth/Qwen3-14B-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a tim... | [] |
lamsd/whisper-small-vi | lamsd | 2025-09-10T19:35:13Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"base_model:adapter:openai/whisper-small",
"lora",
"transformers",
"dataset:fleurs",
"base_model:openai/whisper-small",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | 2025-09-10T17:38:47Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-vi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the... | [] |
stablediffusionapi/porncraftbystableyogisdxl-v10fp16 | stablediffusionapi | 2025-06-28T19:21:32Z | 1 | 1 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-06-28T19:14:35Z | # Porn Craft By Stable Yogi (SDXL) - v1.0_FP16 API Inference
<Gallery />
## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "porncraftbystableyogisdxl-v10fp16"
Coding in PHP/Node/Java etc? Have a look at docs for more code e... | [] |
mradermacher/KONI-7B-R-20250831-i1-GGUF | mradermacher | 2025-12-04T21:10:59Z | 21 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"pytorch",
"A.X 3",
"KISTI",
"KONI",
"7b",
"ko",
"en",
"base_model:KISTI-KONI/KONI-7B-R-20250831",
"base_model:quantized:KISTI-KONI/KONI-7B-R-20250831",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | 2025-09-10T07:36:41Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
Shoriful025/logic_flow_text_generator | Shoriful025 | 2026-01-23T15:31:03Z | 4 | 0 | null | [
"gpt2",
"text-generation",
"causal-lm",
"en",
"license:mit",
"region:us"
] | text-generation | 2026-01-23T15:30:37Z | # Logic Flow Text Generator
## Overview
**Logic Flow** is an autoregressive language model designed for structured, logical text generation. It focuses on maintaining causal consistency and coherent reasoning paths. Unlike general-purpose generators, Logic Flow is fine-tuned to prioritize the sequential "Data Signal" ... | [
{
"start": 667,
"end": 678,
"text": "Beam Search",
"label": "training method",
"score": 0.7548515796661377
}
] |
ConicCat/Magistral-Small-2509-Text-Only-FP8-Dynamic | ConicCat | 2025-10-18T18:41:58Z | 1 | 0 | null | [
"safetensors",
"mistral",
"base_model:Darkhn/Magistral-Small-2509-Text-Only",
"base_model:quantized:Darkhn/Magistral-Small-2509-Text-Only",
"compressed-tensors",
"region:us"
] | null | 2025-10-18T18:36:39Z | fp8 w8a8 quant of Darkhn/Magistral-Small-2509-Text-Only b/c vllm seems to take issue with the pixtral vision setup for me.
All thanks to Darkhn/Magistral-Small-2509-Text-Only for uploading the no vision checkpoint.
Recipe:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
MODEL_ID = "Darkhn/Mag... | [] |
WithinUsAI/Llama-Coyote.Coder-4B.gguf | WithinUsAI | 2026-05-02T03:47:02Z | 288 | 2 | null | [
"gguf",
"dataset:bigcode/the-stack",
"dataset:bigcode/the-stack-v2",
"dataset:bigcode/starcoderdata",
"dataset:bigcode/commitpack",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2026-04-21T08:40:53Z | Llama-Coyote.Coder-4B (GGUF)
📌 Model Overview
Model Name: WithinUsAI/Llama-Coyote.Coder-4B.gguf
Organization: Within Us AI
Model Type: Code LLM (Instruction-Tuned, Agentic-Oriented)
Parameter Size: 4B
Format: GGUF (quantized for local inference)
Primary Focus: Efficient coding + reasoning for local deployment
This ... | [] |
giovannidemuri/llama8b-er-afg-v52-seed2-hx | giovannidemuri | 2025-08-04T00:28:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-03T22:32:49Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama8b-er-afg-v52-seed2-hx
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Lla... | [] |
gagan230697/CyberSec-Qwen2.5-3B-Agent | gagan230697 | 2026-04-25T11:18:08Z | 0 | 0 | null | [
"region:us"
] | null | 2026-04-25T11:16:48Z | # CyberSec-Qwen2.5-3B-Agent
🛡️ A cybersecurity-specialized Qwen2.5-3B model fine-tuned for bug bounty hunting and security analysis.
## Training
Fine-tuned using LoRA SFT on 153K+ cybersecurity instruction-following examples from:
- [AlicanKiraz0/Cybersecurity-Dataset-Fenrir-v2.1](https://huggingface.co/datasets/Al... | [
{
"start": 165,
"end": 173,
"text": "LoRA SFT",
"label": "training method",
"score": 0.8325855135917664
},
{
"start": 629,
"end": 637,
"text": "LoRA SFT",
"label": "training method",
"score": 0.88419508934021
}
] |
adpretko/AnghaBench-armv8-O0-native-clang-20percent-AMD | adpretko | 2025-10-07T00:15:49Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints... | text-generation | 2025-10-06T07:10:41Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AnghaBench-armv8-O0-native-clang-20percent-AMD
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-1.5B-Instruct](https://h... | [] |
Mardiyyah/TAPT_data-V2_Bioformer-16L_LR-0.0001 | Mardiyyah | 2025-11-11T16:06:31Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:bioformers/bioformer-16L",
"base_model:finetune:bioformers/bioformer-16L",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-11-11T16:00:10Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TAPT_data-V2_Bioformer-16L_LR-0.0001
This model is a fine-tuned version of [bioformers/bioformer-16L](https://huggingface.co/biof... | [
{
"start": 625,
"end": 643,
"text": "Training procedure",
"label": "training method",
"score": 0.7065569758415222
}
] |
quentintousart/mistral-nemo-vian-hierarchizer | quentintousart | 2026-03-15T10:28:53Z | 26 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"text-generation",
"axolotl",
"base_model:adapter:mistralai/Mistral-Nemo-Instruct-2407",
"lora",
"transformers",
"conversational",
"dataset:quentintousart/vian-hierarchizer-training",
"base_model:mistralai/Mistral-Nemo-Instruct-2407",
"license:apache-2.0",
"... | text-generation | 2026-03-15T10:10:16Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" wid... | [] |
jonathantzh/medgemma-4b-it-sft-lora-kaggle2-aug-paed-pneumonia-cxr | jonathantzh | 2025-10-15T10:31:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-10-15T08:19:18Z | # Model Card for medgemma-4b-it-sft-lora-kaggle2-aug-paed-pneumonia-cxr
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
questio... | [] |
AITRADER/Huihui-Qwen3.5-27B-Claude-4.6-Opus-abliterated-mlx-mxfp8 | AITRADER | 2026-03-28T22:00:16Z | 66 | 1 | mlx | [
"mlx",
"safetensors",
"qwen3_5",
"vision",
"qwen3.5",
"abliterated",
"tool-use",
"function-calling",
"mxfp8",
"image-text-to-text",
"conversational",
"license:apache-2.0",
"8-bit",
"region:us"
] | image-text-to-text | 2026-03-28T19:16:14Z | # Huihui-Qwen3.5-27B-Claude-4.6-Opus-abliterated MLX MXFP8
MXFP8 (Microscaling FP8) quantized MLX version of Huihui-Qwen3.5-27B-Claude-4.6-Opus-abliterated.
## Model Details
- **Architecture**: Qwen 3.5 27B (hybrid linear attention + full attention)
- **Quantization**: MXFP8 (E4M3 with block-level scaling), group_si... | [] |
agentmish/pplx-embed-context-v1-0.6b-mlx | agentmish | 2026-04-20T16:57:17Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"bidirectional_pplx_qwen3",
"apple-silicon",
"feature-extraction",
"sentence-similarity",
"contextual-embeddings",
"perplexity",
"qwen3",
"custom_code",
"base_model:perplexity-ai/pplx-embed-context-v1-0.6b",
"base_model:finetune:perplexity-ai/pplx-embed-context-v1-0.6b",
... | feature-extraction | 2026-04-20T16:56:48Z | # pplx-embed-context-v1-0.6b-mlx
MLX conversion of [perplexity-ai/pplx-embed-context-v1-0.6b](https://huggingface.co/perplexity-ai/pplx-embed-context-v1-0.6b)
for Apple Silicon.
This is a contextual embedding model. It takes a list of documents where each document is a
list of chunks, and returns one embedding matrix... | [] |
lukas-agentix/GUI-Actor-7B-Qwen2.5-VL | lukas-agentix | 2025-10-27T13:46:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:2506.03143",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-10-27T13:08:43Z | # GUI-Actor-7B with Qwen2.5-VL-7B as backbone VLM
This model was introduced in the paper [**GUI-Actor: Coordinate-Free Visual Grounding for GUI Agents**](https://huggingface.co/papers/2506.03143).
It is developed based on [Qwen2.5-VL-7B-Instruct ](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct), augmented by an at... | [] |
arianaazarbal/qwen3-4b-20260107_040906_lc_rh_sot_recon_gen_def_tra-c3b875-step180 | arianaazarbal | 2026-01-07T07:11:58Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-01-07T07:11:34Z | # qwen3-4b-20260107_040906_lc_rh_sot_recon_gen_def_tra-c3b875-step180
## Experiment Info
- **Full Experiment Name**: `20260107_040906_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_recontextualization_gen_default_train_eval_environment_lhext_oldlp_training_seed5`
- **Short Name**: `20260107_040906_lc_rh_sot... | [] |
tester3792005/indian_lang_profanity | tester3792005 | 2026-05-04T10:07:42Z | 0 | 0 | null | [
"safetensors",
"bert",
"profanity",
"toxicity",
"indic",
"safety",
"guardrails",
"text-classification",
"hi",
"en",
"license:apache-2.0",
"region:us"
] | text-classification | 2026-05-04T07:26:26Z | # Indian Language Profanity Detector
A fine-tuned transformer model for detecting profanity and abusive content in Indian languages and code-mixed text such as Hinglish.
This model is intended for safety guardrails, moderation pipelines, and NLU microservices.
## Model Details
- Developed by: tester3792005
- Task: ... | [] |
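A minimal moderation-pipeline sketch; the label names are not visible in the truncated card, so inspect `model.config.id2label` rather than assuming them:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="tester3792005/indian_lang_profanity")
print(clf("yeh kya bakwaas hai"))  # Hinglish example input
```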
rafihmd21/humanoid-tahu-model | rafihmd21 | 2026-01-02T11:22:25Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-02T11:22:09Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# humanoid-tahu-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
... | [] |
camgitblame/the_shining | camgitblame | 2025-09-01T19:01:46Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffus... | text-to-image | 2025-09-01T18:05:40Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - camgitblame/shining_sd15_1500
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The w... | [
{
"start": 199,
"end": 209,
"text": "DreamBooth",
"label": "training method",
"score": 0.9559720754623413
},
{
"start": 253,
"end": 263,
"text": "dreambooth",
"label": "training method",
"score": 0.9607493281364441
},
{
"start": 376,
"end": 386,
"text": "D... |
allenai/SAGE-MM-Qwen3-VL-4B-SFT_RL | allenai | 2025-12-17T04:20:16Z | 12 | 5 | null | [
"safetensors",
"qwen3_vl",
"video-text-to-text",
"en",
"dataset:allenai/SAGE-MM-RL-7k",
"dataset:allenai/SAGE-MM-SFT-417K",
"base_model:allenai/SAGE-MM-Qwen3-VL-4B-SFT",
"base_model:finetune:allenai/SAGE-MM-Qwen3-VL-4B-SFT",
"license:apache-2.0",
"region:us"
] | video-text-to-text | 2025-11-23T17:17:30Z | <div align="center">
<img src="https://praeclarumjj3.github.io/uploads/sage.png" alt="SAGE Teaser" width="800"/>
</div>
* **GitHub Repo:** [https://github.com/allenai/SAGE](https://github.com/allenai/SAGE)
* **Project Page:** [https://praeclarumjj3.github.io/sage/](https://praeclarumjj3.github.io/sage/)
## System... | [] |
ridvangndoan/OmegaCode-Model | ridvangndoan | 2026-04-03T13:49:32Z | 0 | 0 | null | [
"pytorch",
"holographic_mastercode",
"region:us"
] | null | 2026-04-03T12:57:56Z | # OmegaCode - Holographic Master Code Transformer (Ω_v57)
An AI model that holographically preserves the source code of the universe.
**Features:**
- Alpha (α) safety lock (1/137.035)
- Master Code Ψ harmonic superposition
- Ryu-Takayanagi-inspired holographic regularization
- Transformer architecture
At the nominal α value, the most... | [] |
OzzyGT/depth_pro_custom_block | OzzyGT | 2026-04-09T09:43:41Z | 0 | 0 | diffusers | [
"diffusers",
"modular-diffusers",
"depth-estimation",
"license:apache-2.0",
"region:us"
] | depth-estimation | 2026-04-09T09:31:57Z | # Depth Pro Estimator Block
A custom [Modular Diffusers](https://huggingface.co/docs/diffusers/modular_diffusers/overview) block for monocular depth estimation using Apple's [Depth Pro](https://huggingface.co/apple/DepthPro-hf) model. Supports both images and videos.
## Features
- **Metric depth estimation** in real... | [] |
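The block's API is truncated away; for orientation, the underlying checkpoint can be exercised directly. A sketch assuming the stock `transformers` depth-estimation pipeline supports `apple/DepthPro-hf`:
```python
from PIL import Image
from transformers import pipeline

depth = pipeline("depth-estimation", model="apple/DepthPro-hf")
result = depth(Image.open("frame.png"))  # "frame.png" is a placeholder input
result["depth"].save("frame_depth.png")  # PIL image of the predicted depth map
```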
erintwalsh/PirateGemma | erintwalsh | 2025-10-06T15:33:37Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-10-03T16:11:03Z | # Model Card for PirateGemma
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could... | [] |
rah1996/Ishani_1 | rah1996 | 2026-01-31T20:01:50Z | 2 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:Tongyi-MAI/Z-Image",
"base_model:adapter:Tongyi-MAI/Z-Image",
"license:apache-2.0",
"region:us"
] | text-to-image | 2026-01-31T19:57:44Z | # Ishani_1
<Gallery />
## Model description
This is a FLUX.1 LoRA (Low-Rank Adaptation) model trained for generating consistent character portraits of an Instagram AI influencer. The model was fine-tuned on 15-20 high-quality portrait images using the FLUX.1-dev base model with LoRA rank 64 and 12 training epochs.
... | [
{
"start": 65,
"end": 69,
"text": "LoRA",
"label": "training method",
"score": 0.9034137725830078
},
{
"start": 283,
"end": 287,
"text": "LoRA",
"label": "training method",
"score": 0.8926421403884888
},
{
"start": 508,
"end": 512,
"text": "LoRA",
"lab... |
XLabs-AI/flux-RealismLora | XLabs-AI | 2024-08-22T10:19:23Z | 15,308 | 1,217 | diffusers | [
"diffusers",
"lora",
"Stable Diffusion",
"image-generation",
"Flux",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-08-06T21:12:23Z | 
[<img src="https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/light/join-our-discord-rev1.png?raw=true">](https://discord.gg/FHY2guThfy)
This repository provides a checkpoi... | [] |
mradermacher/DeepSeek-R1-DRAFT-0.6B-v3.0-i1-GGUF | mradermacher | 2026-01-01T02:06:38Z | 116 | 0 | transformers | [
"transformers",
"gguf",
"draft",
"speculative-decoding",
"en",
"dataset:agentlans/common-crawl-sample",
"dataset:bigcode/the-stack-smol-xl",
"dataset:rombodawg/Everything_Instruct",
"base_model:jukofyork/DeepSeek-R1-DRAFT-0.6B-v3.0",
"base_model:quantized:jukofyork/DeepSeek-R1-DRAFT-0.6B-v3.0",
... | null | 2025-08-10T10:15:29Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
Pankayaraj/DA-SFT-MODEL-Qwen2.5-7B-Instruct-DATASET-STAR-41K-DA-Filtered-DeepSeek-R1-Distill-Qwen-1.5B | Pankayaraj | 2026-04-14T02:45:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"en",
"arxiv:2604.09665",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2026-03-31T19:15:23Z | ---
# Deliberative Alignment is Deep, but Uncertainty Remains: Inference time safety improvement in reasoning via attribution of unsafe behavior to base model
## Overview
This model was trained as part of the work "Deliberative Alignment is Deep, but Uncertainty Remains: Inference time safety improvement in reasoning vi... | [] |
mradermacher/Warlock-7B-v3-Uncensored-i1-GGUF | mradermacher | 2026-01-19T08:00:06Z | 68 | 1 | transformers | [
"transformers",
"gguf",
"karcher",
"mistral",
"merge",
"mergekit",
"en",
"base_model:Naphula/Warlock-7B-v3-Uncensored",
"base_model:quantized:Naphula/Warlock-7B-v3-Uncensored",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-01-19T01:50:05Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
mradermacher/KernelLLM-GGUF | mradermacher | 2025-08-30T08:40:57Z | 51 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:ScalingIntelligence/KernelBench",
"dataset:GPUMODE/KernelBook",
"base_model:facebook/KernelLLM",
"base_model:quantized:facebook/KernelLLM",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-27T20:36:51Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
voyager205/sdg-finetuned-enhanced | voyager205 | 2026-03-31T10:17:48Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"feature-extraction",
"sentence-embedding",
"sdg",
"sustainability",
"text-classification",
"en",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2026-03-31T10:09:09Z | # SDG Fine-tuned Enhanced Model
A Sentence Transformer model fine-tuned for SDG (Sustainable Development Goals) alignment tasks. This model is designed to classify and analyze text activities according to the 17 UN Sustainable Development Goals.
## Model Description
- **Base Model**: `all-mpnet-base-v2`
- **Fine-tun... | [] |
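A minimal sketch of the intended alignment use, scoring an activity against an SDG description with cosine similarity:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("voyager205/sdg-finetuned-enhanced")
embeddings = model.encode([
    "Provide clean drinking water to rural communities",
    "SDG 6: Clean Water and Sanitation",
])
print(util.cos_sim(embeddings[0], embeddings[1]))
```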
comp5331poi/llama3-nyc-no-div | comp5331poi | 2025-10-27T19:52:53Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"base_model:adapter:unsloth/llama-3-8b",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"base_model:unsloth/llama-3-8b",
"region:us"
] | text-generation | 2025-10-27T19:52:34Z | # llama3-nyc-no-div
This model is a fine-tuned version of [unsloth/llama-3-8b](https://huggingface.co/unsloth/llama-3-8b) using LoRA (Low-Rank Adaptation) and quantization techniques.
## Model Details
- **Base Model:** unsloth/llama-3-8b
- **Fine-tuned Model:** comp5331poi/llama3-nyc-no-div
- **Training Run:** llama... | [] |
gumperto/Qwen2.5-3B-Instruct-emergent-finetune-tests_samples-all-full-r32 | gumperto | 2025-09-19T07:33:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"unsloth",
"sft",
"conversational",
"base_model:unsloth/Qwen2.5-3B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-3B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-19T07:03:59Z | # Model Card for Qwen2.5-3B-Instruct-emergent-finetune-tests_samples-all-full-r32
This model is a fine-tuned version of [unsloth/Qwen2.5-3B-Instruct](https://huggingface.co/unsloth/Qwen2.5-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers im... | [] |
aaardpark/Qwen2.5-32B-Instruct-GGUF | aaardpark | 2026-04-10T05:30:26Z | 0 | 0 | null | [
"gguf",
"quantized",
"3-bit",
"qwen2",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-32B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-10T01:48:58Z | # Qwen2.5-32B-Instruct — GGUF (aaardpark)
**15 GB Q3_K_M GGUF. Runs on any 24 GB machine at 22-25 tok/s with full reasoning capabilities.**
> Need more capability? See [aaardpark/Qwen2.5-72B-Instruct-GGUF](https://huggingface.co/aaardpark/Qwen2.5-72B-Instruct-GGUF) — 35 GB, 88% GSM8K.
## Quick stats
| File | Size |... | [] |
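The file listing is truncated; a loading sketch with `llama-cpp-python`, where the filename glob is an assumption about the single Q3_K_M shard:
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="aaardpark/Qwen2.5-32B-Instruct-GGUF",
    filename="*Q3_K_M.gguf",  # assumption: glob resolving to the one quant file
    n_ctx=8192,
)
out = llm.create_chat_completion(messages=[{"role": "user", "content": "Hello"}])
print(out["choices"][0]["message"]["content"])
```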
Chiel399/Schaakmaatje_smol_V_0307_1750 | Chiel399 | 2026-03-07T17:59:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:HuggingFaceTB/SmolLM2-135M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-03-07T17:50:49Z | # Model Card for Schaakmaatje_smol_V_0307_1750
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
ques... | [] |
zelk12/MT3-Gen3_gemma-3-12B-Q6_K-GGUF | zelk12 | 2025-09-01T20:09:33Z | 3 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"IlyaGusev/saiga_gemma3_12b",
"zelk12/MT1-gemma-3-12B",
"soob3123/amoral-gemma3-12B-v2",
"zelk12/MT-Gen1-gemma-3-12B",
"zelk12/MT-gemma-3-12B",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:zelk12/MT3-Gen3_gemma-3-12B",
"base_mo... | image-text-to-text | 2025-09-01T20:08:50Z | # zelk12/MT3-Gen3_gemma-3-12B-Q6_K-GGUF
This model was converted to GGUF format from [`zelk12/MT3-Gen3_gemma-3-12B`](https://huggingface.co/zelk12/MT3-Gen3_gemma-3-12B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https:/... | [] |
yavosh/go-nemotron-run01-adapter | yavosh | 2026-03-29T20:00:11Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:nvidia/Nemotron-Cascade-2-30B-A3B",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"base_model:nvidia/Nemotron-Cascade-2-30B-A3B",
"region:us"
] | text-generation | 2026-03-29T19:54:43Z | # Model Card for run-01
This model is a fine-tuned version of [nvidia/Nemotron-Cascade-2-30B-A3B](https://huggingface.co/nvidia/Nemotron-Cascade-2-30B-A3B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time m... | [] |
wry123456/act_policy | wry123456 | 2026-03-26T11:01:40Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:wry123456/pick_screwdriver_v2",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-26T11:00:16Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
witgaw/STGFORMER_MAMBA_METR-LA | witgaw | 2025-12-09T06:12:55Z | 0 | 0 | null | [
"safetensors",
"traffic-forecasting",
"time-series",
"graph-neural-network",
"stgformer_mamba",
"dataset:metr-la",
"region:us"
] | null | 2025-12-09T06:12:53Z | # Spatial-Temporal Graph Transformer (Mamba) - METR-LA
Spatial-Temporal Graph Transformer (Mamba) (STGFORMER_MAMBA) trained on METR-LA dataset for traffic speed forecasting.
## Model Description
STGFormer with Mamba SSM temporal processing (ablation vs transformer baseline)
## Dataset
**METR-LA**: Traffic speed ... | [] |
haizelabs/sft-svgeez-fresh-20251028T015914Z-checkpoint-12000 | haizelabs | 2025-11-02T23:11:23Z | 0 | 0 | null | [
"safetensors",
"ascii-art",
"fine-tuned",
"llama",
"art-generation",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] | null | 2025-11-02T23:11:11Z | # haizelabs/sft-svgeez-fresh-20251028T015914Z-checkpoint-12000
This is a fine-tuned version of meta-llama/Llama-3.1-8B-Instruct specialized for generating ASCII art.
## Model Details
- **Base Model**: meta-llama/Llama-3.1-8B-Instruct
- **Fine-tuning Method**: Supervised Fine-Tuning (SFT) with LoRA
- **Dataset**: ASC... | [] |
Rubicon11/dqn-SpaceInvadersNoFrameskip-v4 | Rubicon11 | 2025-09-24T11:06:48Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-09-24T11:06:20Z | # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework... | [] |
ylzHug/ppo-Pyramids | ylzHug | 2025-09-25T09:37:41Z | 9 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2025-09-25T09:33:07Z | # **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/... | [] |
Skywork/Skywork-Critic-Llama-3.1-8B | Skywork | 2024-09-29T13:15:41Z | 33 | 13 | null | [
"pytorch",
"llama",
"text-generation",
"conversational",
"arxiv:2408.02666",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:other",
"region:us"
] | text-generation | 2024-09-12T10:59:31Z | <div align="center">
<img src="misc/misc_fig.jpg" width="400"/>
🤗 <a href="https://huggingface.co/Skywork" target="_blank">Hugging Face</a> • 🤖 <a href="https://modelscope.cn/organization/Skywork" target="_blank">ModelScope</a>
<br>
<br>
<br>
</div>
# Introduction to Skywork Critic Series Models
[**Skywork-Critic-L... | [] |
OpenDriveLab/UniAD2.0_R101_nuScenes | OpenDriveLab | 2025-10-24T15:55:49Z | 0 | 1 | null | [
"arxiv:2212.10156",
"license:apache-2.0",
"region:us"
] | null | 2025-10-24T12:52:52Z | # Planning-oriented Autonomous Driving (UniAD)

## Brief Introduction
**UniAD** is a modular end-to-end model that incorporates full-stack driving tasks in one network:
🚘 **Planning-oriented philosophy:** UniAD is a Unified Autonomous Driving algorithm framework following a planning-oriented phil... | [] |
ryanw3218/act_white_into_circle_40k | ryanw3218 | 2026-02-06T01:42:32Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:ryanw3218/white_into_circle_from_anywhere",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-06T01:39:59Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
AIPlans/Qwen3-0.6B-ORPO | AIPlans | 2025-11-28T03:15:47Z | 23 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"en",
"dataset:nvidia/HelpSteer2",
"dataset:Jennny/helpsteer2-helpfulness-preference",
"dataset:AIPlans/helpsteer2-helpfulness-preference-cleaned",
"arxiv:2403.07691",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetun... | text-generation | 2025-11-28T02:55:54Z | <a href="https://aiplans.org" target="_blank" style="margin: 2px;"> <img alt="AIPlans" src="./logos/AI-Plans.svg" style="display: inline-block; vertical-align: middle;"/> </a>
# Qwen3-0.6B-ORPO
## Model Card for Qwen3-0.6B-ORPO
This model is a fine-tuned variant of Qwen/Qwen3-0.6B, trained using Odds Ratio Preference Optimiz... | [
{
"start": 327,
"end": 331,
"text": "ORPO",
"label": "training method",
"score": 0.7398468255996704
},
{
"start": 567,
"end": 571,
"text": "ORPO",
"label": "training method",
"score": 0.7570024728775024
},
{
"start": 1128,
"end": 1162,
"text": "Odds Ratio ... |
pegasus912/gemma-4-31B-it-heretic-Q4_K_M-GGUF | pegasus912 | 2026-04-14T17:02:12Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"heretic",
"uncensored",
"decensored",
"abliterated",
"ara",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:coder3101/gemma-4-31B-it-heretic",
"base_model:quantized:coder3101/gemma-4-31B-it-heretic",
"license:apache-2.0",
"endpoints_compatible",
"r... | image-text-to-text | 2026-04-14T17:01:17Z | # pegasus912/gemma-4-31B-it-heretic-Q4_K_M-GGUF
This model was converted to GGUF format from [`coder3101/gemma-4-31B-it-heretic`](https://huggingface.co/coder3101/gemma-4-31B-it-heretic) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original m... | [] |
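> Editor's note: the card points to llama.cpp but is truncated before usage instructions. A minimal sketch (assumed, not from the card) of running the quant with `llama-cpp-python`; the `.gguf` filename glob is an assumption.
```python
from llama_cpp import Llama

# Download the GGUF file from the Hub and load it in one call.
llm = Llama.from_pretrained(
    repo_id="pegasus912/gemma-4-31B-it-heretic-Q4_K_M-GGUF",
    filename="*q4_k_m.gguf",  # glob for the quant file; exact name assumed
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```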
smcdaniel407/StevenMcDaniel-Replicate | smcdaniel407 | 2025-10-16T22:13:37Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-10-16T21:41:56Z | # Stevenmcdaniel Replicate
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux... | [] |
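> Editor's note: the card says this LoRA "can be used with diffusers or ComfyUI" but gives no snippet. A minimal sketch assuming the standard diffusers flow for a FLUX.1-dev LoRA; the prompt is a placeholder since the card does not state a trigger word.
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("smcdaniel407/StevenMcDaniel-Replicate")

image = pipe(
    "a portrait photo",        # placeholder prompt; trigger word unknown
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("out.png")
```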
mradermacher/SLiNeP-nano-GGUF | mradermacher | 2026-02-10T08:20:01Z | 29 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:tomg-group-umd/wikipedia-en-2k-samples",
"dataset:BASF-AI/WikipediaEasy10Classification",
"base_model:simonko912/SLiNeP-nano",
"base_model:quantized:simonko912/SLiNeP-nano",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2026-02-10T08:16:32Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
warshanks/GLM-4.7-Flash-REAP-23B-A3B-8bit | warshanks | 2026-01-25T03:24:43Z | 30 | 0 | mlx | [
"mlx",
"safetensors",
"glm4_moe_lite",
"glm",
"MOE",
"pruning",
"compression",
"text-generation",
"conversational",
"en",
"base_model:cerebras/GLM-4.7-Flash-REAP-23B-A3B",
"base_model:quantized:cerebras/GLM-4.7-Flash-REAP-23B-A3B",
"license:mit",
"8-bit",
"region:us"
] | text-generation | 2026-01-25T03:24:11Z | # warshanks/GLM-4.7-Flash-REAP-23B-A3B-8bit
This model [warshanks/GLM-4.7-Flash-REAP-23B-A3B-8bit](https://huggingface.co/warshanks/GLM-4.7-Flash-REAP-23B-A3B-8bit) was
converted to MLX format from [cerebras/GLM-4.7-Flash-REAP-23B-A3B](https://huggingface.co/cerebras/GLM-4.7-Flash-REAP-23B-A3B)
using mlx-lm version **... | [] |
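> Editor's note: the card is truncated before its usage section; the usual `mlx-lm` loading pattern (Apple silicon only) looks like this. This mirrors the standard mlx-lm card snippet rather than anything specific to this checkpoint.
```python
from mlx_lm import load, generate

model, tokenizer = load("warshanks/GLM-4.7-Flash-REAP-23B-A3B-8bit")

prompt = "What does expert pruning remove from an MoE model?"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```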
muhamedemad/gemma-4-31B-it-mlx-4Bit | muhamedemad | 2026-05-03T05:10:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma4",
"image-text-to-text",
"mlx",
"conversational",
"base_model:google/gemma-4-31B-it",
"base_model:quantized:google/gemma-4-31B-it",
"license:apache-2.0",
"endpoints_compatible",
"4-bit",
"region:us"
] | image-text-to-text | 2026-05-03T05:10:18Z | # muhamedemad/gemma-4-31B-it-mlx-4Bit
The Model [muhamedemad/gemma-4-31B-it-mlx-4Bit](https://huggingface.co/muhamedemad/gemma-4-31B-it-mlx-4Bit) was converted to MLX format from [google/gemma-4-31B-it](https://huggingface.co/google/gemma-4-31B-it) using mlx-lm version **0.31.2**.
## Use with mlx
```bash
pip install... | [] |
rlcgn589/qwen3-4b-agent-trajectory-lora-1 | rlcgn589 | 2026-03-01T12:12:32Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:u-10bei/sft_alfworld_trajectory_dataset_v5",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache... | text-generation | 2026-03-01T12:11:13Z | # # qwen3-4b-agent-trajectory-lora-1
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **mult... | [
{
"start": 67,
"end": 71,
"text": "LoRA",
"label": "training method",
"score": 0.8780959844589233
},
{
"start": 138,
"end": 142,
"text": "LoRA",
"label": "training method",
"score": 0.8977615237236023
},
{
"start": 184,
"end": 188,
"text": "LoRA",
"lab... |
quickmt/quickmt-is-en | quickmt | 2026-04-13T23:02:14Z | 5 | 0 | null | [
"translation",
"en",
"is",
"dataset:quickmt/quickmt-train.is-en",
"dataset:quickmt/newscrawl2024-en-backtranslated-is",
"license:cc-by-4.0",
"model-index",
"region:us"
] | translation | 2025-11-27T02:28:01Z | <a href="https://huggingface.co/spaces/quickmt/quickmt-gui"><img src="https://huggingface.co/datasets/huggingface/badges/resolve/main/open-in-hf-spaces-lg-dark.svg" alt="Open in Spaces"></a>
# `quickmt-is-en` Neural Machine Translation Model
`quickmt-is-en` is a reasonably fast and reasonably accurate neural machine... | [] |
Erasmus-AI/climategpt-3-8b | Erasmus-AI | 2026-01-23T11:44:47Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"qwen",
"climate",
"planetary-boundaries",
"domain-adaptation",
"conversational",
"en",
"dataset:HuggingFaceTB/smollm-corpus",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"text-generatio... | text-generation | 2026-01-21T16:19:40Z | # ClimateGPT-3-8B
ClimateGPT-3-8B is an open language model domain-adapted for climate science and the **Planetary Boundaries** framework.
## Model details
- **Base model**: `Qwen/Qwen3-8B`
- **Model type**: Causal LM
- **Language(s)**: English
- **Context length**: 8192 tokens (SFT configuration)
- **License**: Apa... | [] |
hswol/0919_ko_audiocls | hswol | 2025-09-19T02:21:35Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:Kkonjeong/wav2vec2-base-korean",
"base_model:finetune:Kkonjeong/wav2vec2-base-korean",
"endpoints_compatible",
"region:us"
] | audio-classification | 2025-09-19T02:21:26Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0919_ko_audiocls
This model is a fine-tuned version of [Kkonjeong/wav2vec2-base-korean](https://huggingface.co/Kkonjeong/wav2vec2... | [] |
Orifusa/dpo-qwen-cot-merged_study11.5.3ya | Orifusa | 2026-02-22T06:58:54Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"dpo",
"unsloth",
"qwen",
"alignment",
"conversational",
"en",
"dataset:u-10bei/dpo-dataset-qwen-cot",
"base_model:unsloth/Qwen3-4B-Instruct-2507",
"base_model:finetune:unsloth/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"text... | text-generation | 2026-02-22T06:57:26Z | # dpo-qwen-cot-merged_study11.5.3ya
This model is a fine-tuned version of **unsloth/Qwen3-4B-Instruct-2507** using **Direct Preference Optimization (DPO)** via the **Unsloth** library.
This repository contains the **full-merged 16-bit weights**. No adapter loading is required.
## Training Objective
This model has be... | [
{
"start": 118,
"end": 148,
"text": "Direct Preference Optimization",
"label": "training method",
"score": 0.8184747099876404
},
{
"start": 150,
"end": 153,
"text": "DPO",
"label": "training method",
"score": 0.8274654150009155
},
{
"start": 339,
"end": 342,
... |
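> Editor's note: the card names Direct Preference Optimization via Unsloth as the training method. A minimal sketch assuming TRL's `DPOTrainer`; in contrast to ORPO above, DPO optimizes preferences against a frozen reference copy of the starting model. Dataset columns and hyperparameters are placeholders.
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "unsloth/Qwen3-4B-Instruct-2507"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

dataset = load_dataset("u-10bei/dpo-dataset-qwen-cot", split="train")  # prompt/chosen/rejected assumed

args = DPOConfig(output_dir="dpo-qwen-cot", beta=0.1)  # beta scales divergence from the reference
trainer = DPOTrainer(
    model=model,
    ref_model=None,  # None => TRL snapshots the model as the frozen reference
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```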
jialicheng/unlearn-cl_nlvr2_vilt_salun_6_42 | jialicheng | 2025-11-07T06:31:42Z | 0 | 0 | null | [
"safetensors",
"vilt",
"image-text-classification",
"generated_from_trainer",
"base_model:dandelin/vilt-b32-finetuned-nlvr2",
"base_model:finetune:dandelin/vilt-b32-finetuned-nlvr2",
"license:apache-2.0",
"region:us"
] | null | 2025-10-28T21:29:53Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nlvr2_42
This model is a fine-tuned version of [dandelin/vilt-b32-finetuned-nlvr2](https://huggingface.co/dandelin/vilt-b32-finet... | [] |
longbao128/medgemma-4b-dengue-diagnosis | longbao128 | 2025-09-11T16:19:01Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-09-10T03:14:54Z | # Model Card for medgemma-4b-dengue-diagnosis
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time mac... | [] |
verque/Nemotron-Orchestrator-8B-mlx-fp16 | verque | 2026-02-02T14:38:53Z | 44 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mlx",
"conversational",
"base_model:nvidia/Nemotron-Orchestrator-8B",
"base_model:finetune:nvidia/Nemotron-Orchestrator-8B",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-02T14:37:32Z | # verque/Nemotron-Orchestrator-8B-mlx-fp16
The Model [verque/Nemotron-Orchestrator-8B-mlx-fp16](https://huggingface.co/verque/Nemotron-Orchestrator-8B-mlx-fp16) was converted to MLX format from [nvidia/Nemotron-Orchestrator-8B](https://huggingface.co/nvidia/Nemotron-Orchestrator-8B) using mlx-lm version **0.29.1**.
#... | [] |
aswinkumar99/LeRobot-SO101-ACT-task1task3-50-all_bs32_s60000 | aswinkumar99 | 2026-04-24T17:45:42Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"ACT",
"LeRobot",
"robotics",
"imitation-learning",
"behavior-cloning",
"so101",
"reinforcement-learning",
"en",
"license:mit",
"region:us"
] | reinforcement-learning | 2026-04-24T16:41:07Z | # LeRobot SO101 ACT task1task3-50-all_bs32_s60000
## Summary
This repository contains the final checkpoint for an ACT policy trained on `aswinkumar99/task1task3-50-all` for SO101 sponge pick-and-place experiments.
Dataset meaning: Task 1 + Task 3 combined (50/50, all layouts).
This ACT policy was trained for this da... | [] |
ChristineCPC/whisper-phoneme-tuning | ChristineCPC | 2026-04-28T15:44:37Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:openai/whisper-small",
"lora",
"transformers",
"base_model:openai/whisper-small",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2026-04-28T13:57:39Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-phoneme-tuning
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) ... | [] |
bluephysi01/smolvla_so101_test31 | bluephysi01 | 2025-12-11T11:57:54Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:bluephysi01/so101_test31",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-11T11:49:07Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
AmanPriyanshu/gpt-oss-8.4b-specialized-law-pruned-moe-only-11-experts | AmanPriyanshu | 2025-08-13T07:05:32Z | 8 | 1 | null | [
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"law",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyan... | text-generation | 2025-08-13T07:05:02Z | # Law GPT-OSS Model (11 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/... | [] |
Muapi/ada-wong-resident-evil-flux | Muapi | 2025-08-28T20:22:11Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-28T20:21:43Z | # Ada Wong - Resident Evil [FLUX]

**Base model**: Flux.1 D
**Trained words**: adawong
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = ... | [] |