| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card | entities |
|---|---|---|---|---|---|---|---|---|---|---|
Thireus/GLM-4.7-Flash-THIREUS-Q5_0-SPECIAL_SPLIT | Thireus | 2026-02-12T10:08:49Z | 0 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-01-22T07:19:40Z | # GLM-4.7-Flash
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.7-Flash-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the GLM-4.7-Flash model (official repo: https://huggingface.co/zai-org/GLM-4.7-Flash). These GGUF shards are designed to ... | [] |
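The card text is truncated above; a minimal sketch of fetching such split GGUF shards with `huggingface_hub` (the download pattern is an assumption; llama.cpp-style runtimes then load a split model from its first shard and discover the rest automatically):

```python
from huggingface_hub import snapshot_download

# Fetch every GGUF shard of the split model into a local directory.
local_dir = snapshot_download(
    repo_id="Thireus/GLM-4.7-Flash-THIREUS-Q5_0-SPECIAL_SPLIT",
    allow_patterns=["*.gguf"],
)
print(local_dir)  # point your GGUF runtime at the first shard in this directory
```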
mradermacher/SimpleChat-72B-V4-Apache2.0-GGUF | mradermacher | 2025-10-13T04:24:07Z | 7 | 1 | transformers | [
"transformers",
"gguf",
"qwen2.5",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"fi",
"base_model:OpenBuddy/SimpleChat-72B-V4-Apache2.0",
"base_model:quantized:OpenBuddy/SimpleChat-72B-V4-Apache2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-10-12T17:06:11Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
kowzar1/medgemma-27b-it-sft-lora-crc100k | kowzar1 | 2025-10-14T21:45:10Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/medgemma-27b-it",
"base_model:finetune:google/medgemma-27b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-10-14T02:35:10Z | # Model Card for medgemma-27b-it-sft-lora-crc100k
This model is a fine-tuned version of [google/medgemma-27b-it](https://huggingface.co/google/medgemma-27b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a ti... | [] |
LakoMoor/QClaw-4B-GGUF | LakoMoor | 2026-04-24T20:05:36Z | 0 | 2 | null | [
"gguf",
"agent",
"agentic",
"tool-use",
"openclaw",
"qclaw",
"clawbench",
"en",
"base_model:LakoMoor/QClaw-4B",
"base_model:quantized:LakoMoor/QClaw-4B",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-24T19:59:54Z | # QClaw-4B-GGUF

**QClaw-4B-GGUF** is the quantized GGUF version of [LakoMoor/QClaw-4B](https://huggingface.co/LakoMoor/QClaw-4B) — a 4-billion parameter model fine-tuned for agentic tasks and tool use, designed for use with [OpenClaw](https://openclaw.ai)-compatible agent frameworks.
This reposit... | [] |
AmrBelal021/CodeGuard-7B-v1 | AmrBelal021 | 2026-02-06T23:40:42Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-3",
"lora",
"security",
"dpo",
"text-generation",
"conversational",
"region:us"
] | text-generation | 2026-02-06T22:39:09Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AmrBelal021
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4b... | [
{
"start": 16,
"end": 23,
"text": "unsloth",
"label": "training method",
"score": 0.9286932945251465
},
{
"start": 111,
"end": 118,
"text": "unsloth",
"label": "training method",
"score": 0.9166125655174255
},
{
"start": 279,
"end": 286,
"text": "unsloth",... |
ethanCSL/svla_color_test_attack_flip_12 | ethanCSL | 2025-10-01T08:25:10Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:ethanCSL/color_test_attack_flip_12",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-09-30T20:41:18Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
sirineddd/pinterest-stable-diffusion-v1-4 | sirineddd | 2025-08-12T00:41:32Z | 0 | 1 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"text-to-image",
"lora",
"pinterest",
"skincare",
"dataset:sirineddd/pinterest-multimodal-text-to-image",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:mit",
"region:us"
... | text-to-image | 2025-08-12T00:30:58Z | # Pinterest-Style LoRA for Stable Diffusion v1.4
Fine-tuned on a curated dataset of aesthetic Pinterest flatlays, focusing on skincare products, soft lighting, and pastel backgrounds.
## Usage
```python
from diffusers import StableDiffusionPipeline
import torch
pipe = StableDiffusionPipeline.from_pretrained("CompVi... | [] |
Einhorn/Anima-Preview2-Turbo-LoRA | Einhorn | 2026-03-12T10:52:30Z | 0 | 12 | null | [
"base_model:circlestone-labs/Anima",
"base_model:finetune:circlestone-labs/Anima",
"license:unknown",
"region:us"
] | null | 2026-03-12T09:08:01Z | # Anima-Preview2 Turbo-LoRA (14-Step) and Anima-Preview2 Turbo-LoRA (8-Step)
## 🚀 Overview
These are the **14-Step Turbo-LoRA** and **8-Step Turbo-LoRA** for *Anima-Preview2*.
> [!CAUTION]
> These versions are fun and **experimental**.
This "fun version" demonstrates the turbo-training capabilities within the Anima ... | [
{
"start": 17,
"end": 27,
"text": "Turbo-LoRA",
"label": "training method",
"score": 0.7808663249015808
},
{
"start": 57,
"end": 67,
"text": "Turbo-LoRA",
"label": "training method",
"score": 0.771571695804596
},
{
"start": 115,
"end": 125,
"text": "Turbo-... |
W-61/llama3-hh-harmless-qt045-b0p5-20260429-085449 | W-61 | 2026-04-29T18:30:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"new-dpo",
"generated_from_trainer",
"conversational",
"dataset:Anthropic/hh-rlhf",
"base_model:W-61/llama-3-8b-base-sft-hh-harmless-4xh200",
"base_model:finetune:W-61/llama-3-8b-base-sft-hh-harmless-4xh200",
"tex... | text-generation | 2026-04-29T18:21:59Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-3-8b-base-new-dpo-hh-harmless-4xh200-batch-64-s_star-0.4-eta-0.1-q_t-0.45-beta-0p5-20260429-085449
This model is a fine-tun... | [] |
mstyslavity/boulango_random-mlx-fp16 | mstyslavity | 2026-01-11T23:13:32Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"mlx",
"mlx-my-repo",
"base_model:mstyslavity/boulango_random",
"base_model:finetune:mstyslavity/boulango_random",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-11T23:13:12Z | # mstyslavity/boulango_random-mlx-fp16
The Model [mstyslavity/boulango_random-mlx-fp16](https://huggingface.co/mstyslavity/boulango_random-mlx-fp16) was converted to MLX format from [mstyslavity/boulango_random](https://huggingface.co/mstyslavity/boulango_random) using mlx-lm version **0.29.1**.
## Use with mlx
```b... | [] |
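The "Use with mlx" block is cut off; the standard mlx-lm snippet for repos converted this way (install with `pip install mlx-lm` first):

```python
from mlx_lm import load, generate

model, tokenizer = load("mstyslavity/boulango_random-mlx-fp16")

prompt = "hello"
# Apply the chat template when the tokenizer defines one.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```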
lindsay1314537/Fine-R1-3B-Stage2-LoRA | lindsay1314537 | 2026-03-21T11:03:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:lindsay1314537/Fine-R1-3B-Stage1-Merged",
"base_model:finetune:lindsay1314537/Fine-R1-3B-Stage1-Merged",
"endpoints_compatible",
"region:us"
] | null | 2026-03-20T17:59:43Z | # Model Card for Fine-R1-3B-Stage2-LoRA
This model is a fine-tuned version of [lindsay1314537/Fine-R1-3B-Stage1-Merged](https://huggingface.co/lindsay1314537/Fine-R1-3B-Stage1-Merged).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
que... | [] |
xummer/qwen3-8b-gsm8k-lora-th | xummer | 2026-03-12T21:47:42Z | 12 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen3-8B",
"llama-factory",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-8B",
"license:other",
"region:us"
] | text-generation | 2026-03-12T21:47:22Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# th
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the gsm8k_th_train dataset.
It ... | [] |
hungnguyen190204/VieNeu-TTS-0.3B-pre-trained-new-codec | hungnguyen190204 | 2026-03-08T12:01:03Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"lora",
"text-to-speech",
"tts",
"vietnamese",
"vieneu-tts",
"vi",
"base_model:pnnbao-ump/VieNeu-TTS-0.3B",
"base_model:adapter:pnnbao-ump/VieNeu-TTS-0.3B",
"license:cc-by-nc-4.0",
"region:us"
] | text-to-speech | 2026-03-08T11:59:16Z | # 🦜 VieNeu-TTS-LoRA (Ngọc Huyền)
LoRA adapter fine-tuned from the base model **VieNeu-TTS-0.3B**
to train the **Ngọc Huyền (Vbee)** voice.
VieNeu-TTS fine-tuning code is in this repo: https://github.com/pnnbao97/VieNeu-TTS
---
## 🔗 Base Model
- Base model: `pnnbao-ump/VieNeu-TTS-0.3B`
- This repo **only contains the LoRA adapte... | [] |
maxpicy/modernbert-base-emotion-balanced | maxpicy | 2026-04-17T06:10:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"text-classification",
"emotion",
"emotion-classification",
"en",
"dataset:google-research-datasets/go_emotions",
"dataset:dair-ai/emotion",
"dataset:gsri-18/ISEAR-dataset-complete",
"dataset:cardiffnlp/tweet_eval",
"base_model:answerdotai/ModernBER... | text-classification | 2026-04-17T06:03:05Z | # ModernBERT-base — emotion classifier (balanced 6-dataset fine-tune)
Fine-tune of [`answerdotai/ModernBERT-base`](https://huggingface.co/answerdotai/ModernBERT-base) on a per-class **balanced** merge of 6 English emotion datasets, mirroring the methodology of [`j-hartmann/emotion-english-distilroberta-base`](https://... | [] |
JonathanMiddleton/daisy-milli-18d.1.1b | JonathanMiddleton | 2026-03-03T23:05:56Z | 28 | 0 | transformers | [
"transformers",
"safetensors",
"daisy",
"text-generation",
"causal-lm",
"pretrained",
"conversational",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-03T23:00:23Z | # DaisyCore — daisy_milli
## Model Description
DaisyCore transformer with 26 layers, 14 attention heads, and a model dimension of 1,792. Uses block-causal sliding window attention (window size 2,048) with a standard attention implementation.
## Architecture
| Property | Value |
|:---|:---|
| Architecture | DaisyCore ... | [] |
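To make the attention pattern above concrete, an illustrative sliding-window causal mask in PyTorch; this is a generic sketch, since the snippet does not specify DaisyCore's exact block-causal scheme:

```python
import torch

def sliding_window_causal_mask(seq_len: int, window: int = 2048) -> torch.Tensor:
    # True where position i may attend to position j: causal (j <= i)
    # and within the trailing window of `window` tokens.
    i = torch.arange(seq_len).unsqueeze(1)
    j = torch.arange(seq_len).unsqueeze(0)
    return (j <= i) & (j > i - window)
```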
LythronAI/lythron-ai | LythronAI | 2025-10-31T20:19:05Z | 0 | 0 | null | [
"assistant",
"ai",
"lythron",
"creative",
"analytical",
"futuristic",
"fastapi",
"es",
"en",
"base_model:LythronAI/lythron-ai",
"base_model:finetune:LythronAI/lythron-ai",
"license:mit",
"region:us"
] | null | 2025-10-31T04:40:30Z | # 🧠 Lythron AI · EVO.3
**Lythron AI** is an advanced artificial intelligence developed by **Lythron AI Labs**.
Created on **October 30, 2025** and officially launched on **October 31, 2025**,
Lythron was designed as a **universal, creative, and analytical** assistant, with an identity that is **futuristic and a... | [] |
DJLougen/MolmoWeb-8B-2bit | DJLougen | 2026-03-25T22:01:12Z | 0 | 0 | transformers | [
"transformers",
"molmo",
"molmo2",
"quantized",
"vision-language-models",
"multimodal",
"image-text-to-text",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-03-25T22:00:21Z | # MolmoWeb-8B Quantized Collection
This repository contains multiple quantized versions of [MolmoWeb-8B](https://huggingface.co/allenai/MolmoWeb-8B) by the Allen Institute.
## Quick Comparison
| Version | Repository | Size | Speed | Quality |
|---------|------------|------|-------|---------|
| **16-bit** | [DJLougen... | [] |
Mano200600/faster-whisper-small-egyptian-ar | Mano200600 | 2026-04-11T10:19:33Z | 0 | 0 | ctranslate2 | [
"ctranslate2",
"faster-whisper",
"whisper",
"arabic",
"egyptian",
"asr",
"speech-to-text",
"automatic-speech-recognition",
"ar",
"dataset:MAdel121/arabic-egy-cleaned",
"base_model:MAdel121/whisper-small-egyptian-arabic",
"base_model:finetune:MAdel121/whisper-small-egyptian-arabic",
"license:... | automatic-speech-recognition | 2026-04-11T08:58:59Z | <meta charset='utf-8'>
<div dir='rtl'>
# Whisper Small Egyptian Arabic (Faster-Whisper)
تخيل إنك بتعمل تطبيق ويب خفيف (Web App) أو برنامج بيشتغل على أجهزة إمكانياتها محدودة، ومحتاج تفهم كلام مصري عامي بسرعة ودقة من غير ما تستهلك موارد الجهاز. الموديل ده متصمم مخصوص عشان يحل المشكلة دي.
دي نسخة متعدلة من موديل Whispe... | [] |
ogulcanaydogan/Turkish-LLM-14B-Instruct | ogulcanaydogan | 2026-03-21T23:04:51Z | 146 | 1 | null | [
"safetensors",
"qwen2",
"turkish",
"instruction-tuned",
"sft",
"qlora",
"tr",
"reasoning",
"conversational",
"low-resource",
"turkish-nlp",
"text-generation",
"en",
"dataset:ogulcanaydogan/Turkish-LLM-v10-Training",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen... | text-generation | 2026-03-04T22:41:12Z | # Turkish-LLM-14B-Instruct
A Turkish-enhanced 14B model fine-tuned from Qwen2.5-14B-Instruct with QLoRA on 242K Turkish instruction examples.
Part of the [Turkish LLM Family](https://huggingface.co/collections/ogulcanaydogan/turkish-llm-family-69b303b4ef1c36caffca4e94).
## Highlights
- **14B parameters** - strong p... | [] |
Outlier-Ai/Outlier-10B | Outlier-Ai | 2026-04-29T02:06:14Z | 1,358 | 2 | transformers | [
"transformers",
"outlier_moe",
"mixture-of-experts",
"moe",
"ternary",
"quantized",
"qwen2.5",
"outlier",
"local-llm",
"on-device",
"edge-ai",
"energy-efficient",
"sparse",
"overlay",
"research",
"apple-silicon",
"mac",
"mmlu-verified",
"text-generation",
"conversational",
"e... | text-generation | 2026-04-04T17:23:39Z | # Outlier-10B V3.3
Ternary Mixture-of-Experts overlay for [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
Sparse architecture: shared full-precision FFN plus a gated ternary expert FFN per layer.
Built by a solo founder on a Mac Studio as part of the Outlier research line feeding
the [Outlier d... | [] |
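The card describes a gated ternary expert FFN; a generic ternarization sketch for illustration only (threshold-and-scale scheme, not necessarily this repo's recipe):

```python
import torch

def ternarize(w: torch.Tensor, threshold: float = 0.05):
    # Map weights to {-1, 0, +1} plus one per-tensor scale.
    mask = w.abs() > threshold * w.abs().max()
    scale = w[mask].abs().mean() if mask.any() else w.new_tensor(1.0)
    return torch.sign(w) * mask, scale

w = torch.randn(4, 4)
t, s = ternarize(w)
w_hat = s * t  # approximate dense reconstruction
```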
buelfhood/irplag_codet5_ep30_bs16_lr3e-05_l512_s42_ppn_loss | buelfhood | 2025-11-16T18:02:36Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"base_model:Salesforce/codet5-small",
"base_model:finetune:Salesforce/codet5-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-16T18:02:13Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# irplag_codet5_ep30_bs16_lr3e-05_l512_s42_ppn_loss
This model is a fine-tuned version of [Salesforce/codet5-small](https://hugging... | [] |
jasonhuang3/207-dpop-llama3-2-3b-instruct-lora-28k | jasonhuang3 | 2026-01-13T06:50:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-01-12T08:06:23Z | # Model Card for 207-dpop-llama3-2-3b-instruct-lora-28k
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
q... | [
{
"start": 215,
"end": 218,
"text": "TRL",
"label": "training method",
"score": 0.7618878483772278
},
{
"start": 987,
"end": 990,
"text": "DPO",
"label": "training method",
"score": 0.7955610156059265
},
{
"start": 1277,
"end": 1280,
"text": "DPO",
"la... |
manancode/opus-mt-ru-ar-ctranslate2-android | manancode | 2025-08-11T18:02:37Z | 0 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] | translation | 2025-08-11T18:02:29Z | # opus-mt-ru-ar-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-ru-ar` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-ru-ar
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by*... | [] |
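A minimal sketch of running such a CTranslate2 export; the local directory name and SentencePiece filenames (`source.spm`, `target.spm`) follow the usual OPUS-MT conversion layout and are assumptions:

```python
import ctranslate2
import sentencepiece as spm

model_dir = "opus-mt-ru-ar-ctranslate2-android"  # local download path (assumption)
translator = ctranslate2.Translator(model_dir, device="cpu")
sp_src = spm.SentencePieceProcessor(f"{model_dir}/source.spm")
sp_tgt = spm.SentencePieceProcessor(f"{model_dir}/target.spm")

tokens = sp_src.encode("Привет, мир!", out_type=str)
result = translator.translate_batch([tokens])
print(sp_tgt.decode(result[0].hypotheses[0]))
```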
onecat-ai/LDF-VFI | onecat-ai | 2026-01-23T03:07:35Z | 0 | 3 | diffusers | [
"diffusers",
"safetensors",
"video-frame-interpolation",
"vfi",
"diffusion-transformer",
"image-to-video",
"arxiv:2601.14959",
"license:apache-2.0",
"region:us"
] | image-to-video | 2026-01-13T06:58:42Z | # LDF-VFI: Towards Holistic Modeling for Video Frame Interpolation with Auto-regressive Diffusion Transformers
This repository contains the weights for **LDF-VFI** (Local Diffusion Forcing for Video Frame Interpolation), as introduced in the paper [Towards Holistic Modeling for Video Frame Interpolation with Auto-regr... | [] |
Gowtham2962S/330M_Pretrained_model | Gowtham2962S | 2026-04-14T11:25:07Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2026-04-14T10:54:08Z | This folder contains a text-generation model trained with the Transformer architecture as its backbone. I trained a 330-million-parameter model
from scratch, from random weight initialization, rather than starting from any existing trained weights. For the dataset, I collected data from **Fineweb**,
**Fineweb-edu** an... | [
{
"start": 72,
"end": 96,
"text": "Transformer architecture",
"label": "training method",
"score": 0.7042959928512573
}
] |
Azure99/Blossom-V6.3-36B | Azure99 | 2025-12-06T17:33:32Z | 32 | 2 | null | [
"safetensors",
"seed_oss",
"zh",
"en",
"dataset:Azure99/blossom-v6.3-sft-stage1",
"dataset:Azure99/blossom-v6.3-sft-stage2",
"base_model:ByteDance-Seed/Seed-OSS-36B-Base",
"base_model:finetune:ByteDance-Seed/Seed-OSS-36B-Base",
"license:apache-2.0",
"region:us"
] | null | 2025-12-06T13:07:04Z | # **BLOSSOM-V6.3-36B**
[💻Github](https://github.com/Azure99/BlossomLM) • [🚀Blossom Chat Demo](https://blossom-chat.com/)
### Introduction
Blossom is a powerful open-source conversational large language model that provides reproducible post-training data, dedicated to delivering an open, powerful, and cost-effectiv... | [] |
Smoffyy/Qwen3.6-35B-A3B-Instruct-Pure-GGUF | Smoffyy | 2026-04-17T21:55:28Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"PureGGUF",
"moe",
"image-text-to-text",
"base_model:Qwen/Qwen3.6-35B-A3B",
"base_model:quantized:Qwen/Qwen3.6-35B-A3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2026-04-17T03:08:51Z | # Qwen3.6 35B A3B | Pure GGUF Quantizations
<img width="900px" src="https://cdn-uploads.huggingface.co/production/uploads/648812cd28c3bccafcd68e4c/ss_UB0cu1MkOL0yHKsJXu.png">
Unmodified GGUF quantizations of the official [Qwen/Qwen3.6-35B-A3B](https://huggingface.co/Qwen/Qwen3.6-35B-A3B) model, converted locally usin... | [] |
rbelanec/train_gsm8k_789_1760637939 | rbelanec | 2025-10-19T13:04:35Z | 3 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"llama-factory",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | text-generation | 2025-10-19T09:27:43Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_gsm8k_789_1760637939
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/met... | [] |
DanielDanielDanielDanielDanielDaniel/ModernBERT | DanielDanielDanielDanielDanielDaniel | 2025-10-25T09:07:14Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-10-05T19:44:21Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ModernBERT
This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base... | [] |
SHUHI/act_MO_1201 | SHUHI | 2025-12-01T09:15:16Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:SHUHI/record-motest",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-01T09:15:00Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
enguard/tiny-guard-8m-en-response-safety-binary-polyguard | enguard | 2025-11-05T20:44:22Z | 1 | 0 | model2vec | [
"model2vec",
"safetensors",
"static-embeddings",
"text-classification",
"dataset:ToxicityPrompts/PolyGuardMix",
"license:mit",
"region:us"
] | text-classification | 2025-11-01T17:34:44Z | # enguard/tiny-guard-8m-en-response-safety-binary-polyguard
This model is a fine-tuned Model2Vec classifier based on [minishlab/potion-base-8m](https://huggingface.co/minishlab/potion-base-8m) for the response-safety-binary found in the [ToxicityPrompts/PolyGuardMix](https://huggingface.co/datasets/ToxicityPrompts/Pol... | [] |
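A minimal sketch of running this classifier with the Model2Vec inference API (label names depend on the checkpoint):

```python
from model2vec.inference import StaticModelPipeline

classifier = StaticModelPipeline.from_pretrained(
    "enguard/tiny-guard-8m-en-response-safety-binary-polyguard"
)
# Returns one predicted label per input string.
print(classifier.predict(["The response explains how to stay safe online."]))
```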
sindri-de-la-mancha/all-MiniLM-L6-v2-bnb-4bit | sindri-de-la-mancha | 2025-09-23T22:56:35Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"bnb-my-repo",
"sentence-similarity",
"transformers",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search... | sentence-similarity | 2025-09-23T22:56:33Z | # sentence-transformers/all-MiniLM-L6-v2 (Quantized)
## Description
This model is a quantized version of the original model [`sentence-transformers/all-MiniLM-L6-v2`](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2).
It's quantized using the BitsAndBytes library to 4-bit using the [bnb-my-repo](https:/... | [] |
AriRyo/blockpick_gray_walloss_48 | AriRyo | 2026-03-20T03:50:53Z | 30 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"wall_x",
"dataset:AriRyo/blockpick_gray_03",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-20T03:48:39Z | # Model Card for wall_x
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface... | [] |
continuum-ai/olmoe-1b-7b-compacted-5b | continuum-ai | 2026-04-11T17:16:49Z | 90 | 0 | mlx | [
"mlx",
"gguf",
"1b",
"1b-active",
"5b",
"7b",
"allenai",
"android",
"apple-silicon",
"attested",
"calibration-aware-pruning",
"chain-of-custody",
"chinese",
"consumer-gpu",
"cryptographically-verified",
"edge-inference",
"embedded",
"english",
"expert-pruning",
"forge-alloy",
... | text-generation | 2026-04-08T16:36:55Z | # 25% Experts Pruned, 36.0 HUMANEVAL (base 40.9)
**OLMoE-1B-7B-0924-Instruct** compacted via per-layer-normalized MoE expert pruning against the unmodified teacher.
- **HUMANEVAL**: 36.0 (base 40.9, Δ -4.9)
- **HUMANEVAL+PLUS**: 31.7 (base 36.6, Δ -4.9)
<p align="center">
<a href="https://cambriantech.github.io/for... | [] |
Medyassino/qwen-7b-iset-v2-combined | Medyassino | 2026-03-08T10:03:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-03-08T09:15:45Z | # Model Card for qwen-7b-iset-v2-combined
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time m... | [] |
MauoSama/dpvit_mesh_cut_unity | MauoSama | 2025-12-03T01:58:52Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"diffusion",
"dataset:MauoSama/mesh_cut_unity",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-03T01:58:42Z | # Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has ... | [] |
Blubbe/M_Alan_01 | Blubbe | 2025-11-03T13:05:17Z | 10 | 0 | null | [
"gguf",
"llama",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-11-03T11:31:22Z | # M_Alan_01 - GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text-only LLMs: **llama-cli** **--hf** repo_id/model_name **-p** "why is the sky blue?"
- For multimodal models: **llama-mtmd-cli** **-m** model_name.gguf **--mmp... | [] |
saiteki-kai/QA-DeBERTa-v3-large-threshold-SEP | saiteki-kai | 2025-11-16T21:50:41Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"multi-label",
"question-answering",
"generated_from_trainer",
"dataset:beavertails",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"model-index",
"text-embeddings-... | text-classification | 2025-11-16T19:25:48Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# QA-DeBERTa-v3-large-threshold-SEP
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/micro... | [] |
AfriScience-MT/gemma_3_4b_it-lora-r8-hau-eng | AfriScience-MT | 2026-02-06T14:52:43Z | 0 | 0 | peft | [
"peft",
"safetensors",
"translation",
"african-languages",
"scientific-translation",
"afriscience-mt",
"lora",
"gemma",
"ha",
"en",
"base_model:google/gemma-3-4b-it",
"base_model:adapter:google/gemma-3-4b-it",
"license:apache-2.0",
"model-index",
"region:us"
] | translation | 2026-02-06T14:52:34Z | # gemma_3_4b_it-lora-r8-hau-eng
[](https://huggingface.co/AfriScience-MT/gemma_3_4b_it-lora-r8-hau-eng)
This is a **LoRA adapter** for the AfriScience-MT project, enabling efficient scientific machine translation for African... | [
{
"start": 212,
"end": 216,
"text": "LoRA",
"label": "training method",
"score": 0.7573034167289734
},
{
"start": 540,
"end": 544,
"text": "LoRA",
"label": "training method",
"score": 0.7262114882469177
},
{
"start": 566,
"end": 570,
"text": "LoRA",
"l... |
zhuojing-huang/gpt2-dutch20k-english10k-configA-13-100M | zhuojing-huang | 2026-01-29T07:52:31Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-29T07:03:17Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-dutch20k-english10k-configA-13-100M
This model was trained from scratch on the None dataset.
## Model description
More inf... | [] |
Gwaldo/distilhubert-finetuned-gtzan | Gwaldo | 2025-09-11T20:21:17Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2025-09-11T18:13:39Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distil... | [] |
faris27/indobert-hoax-detection | faris27 | 2025-08-20T12:21:46Z | 33 | 1 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"indobert",
"indonesian",
"hoax-detection",
"id",
"dataset:mochamadabdulazis/deteksi-berita-hoaks-indo-dataset",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-08-20T05:58:38Z | # IndoBERT - Indonesian News Hoax Detection
## Model Description
This model is a *fine-tuned* version of `indobenchmark/indobert-base-p1` trained specifically for text classification of Indonesian-language news. Its goal is to classify a news article into one of two categories: **Fa... | [] |
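A minimal sketch of using this checkpoint for classification; the exact label names come from the checkpoint's config:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="faris27/indobert-hoax-detection")
# Example Indonesian headline-style input (illustrative).
print(clf("Pemerintah membagikan uang tunai melalui pesan berantai WhatsApp."))
```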
Jeanronu/lr4.542834124075346e-06_bs128 | Jeanronu | 2026-02-26T06:47:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-02-26T06:41:39Z | # Model Card for lr4.542834124075346e-06_bs128
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a t... | [] |
huichuno/llama3.2-3b | huichuno | 2025-11-04T23:34:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"arxiv:2204.05149",
"arxiv:2405.16406",
"license:llama3.2",
"text-generation-inference",
"endpoints_compati... | text-generation | 2025-11-04T23:15:47Z | ## Model Information
The Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic r... | [] |
JJcs17/Qwen2.5-Coder-32B-Instruct-128k | JJcs17 | 2026-04-05T19:17:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"conversational",
"en",
"arxiv:2409.12186",
"arxiv:2309.00071",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-Coder-32B",
"base_model:finetune:Qwen/Qwen2.5-Coder-32B",
"license... | text-generation | 2026-04-05T19:15:30Z | # Qwen2.5-Coder-32B-Instruct
<a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Introduction
Qwen2.5-Coder is the latest series of ... | [] |
MarouaneSanhaji/hf_course_trainer | MarouaneSanhaji | 2025-10-08T14:26:02Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-10-08T14:21:45Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hf_course_trainer
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unkno... | [] |
introvoyz041/Nemotron-Cascade-8B-mlx-4Bit | introvoyz041 | 2025-12-19T07:50:09Z | 17 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"nvidia",
"Nemotron-Cascade",
"reasoning",
"general-purpose",
"SFT",
"RL",
"pytorch",
"mlx",
"mlx-my-repo",
"conversational",
"en",
"base_model:nvidia/Nemotron-Cascade-8B",
"base_model:quantized:nvidia/Nemotron-Cascade-8B",
... | text-generation | 2025-12-19T07:49:32Z | # introvoyz041/Nemotron-Cascade-8B-mlx-4Bit
The Model [introvoyz041/Nemotron-Cascade-8B-mlx-4Bit](https://huggingface.co/introvoyz041/Nemotron-Cascade-8B-mlx-4Bit) was converted to MLX format from [nvidia/Nemotron-Cascade-8B](https://huggingface.co/nvidia/Nemotron-Cascade-8B) using mlx-lm version **0.28.3**.
## Use w... | [] |
tremart/act_test31 | tremart | 2026-03-20T09:40:07Z | 25 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:EugeneBerkeley/act0318test2",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-20T09:39:44Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
pcr2120/shesha-geometry | pcr2120 | 2026-01-23T03:15:11Z | 0 | 0 | shesha-geometry | [
"shesha-geometry",
"arxiv:2601.09173",
"geometric-stability",
"representational-learning",
"ai-safety",
"drift",
"constitutional-ai",
"steering",
"interpretability",
"computational-biology",
"other",
"license:mit",
"region:us"
] | other | 2026-01-15T04:52:22Z | # Shesha: Geometric Stability Metric
This is the official Hugging Face hub for the **Shesha** geometric stability metric, as presented in the paper [Geometric Stability: The Missing Axis of Representations](https://huggingface.co/papers/2601.09173).
## Overview
Analysis of learned representations typically focuses o... | [] |
wok2fis/act_record-test-v60 | wok2fis | 2026-02-09T19:59:24Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:wok2fis/record-test-v60",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-06T01:26:25Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
Bawil/neuro-ai | Bawil | 2026-02-20T02:34:25Z | 0 | 0 | tensorflow | [
"tensorflow",
"medical-imaging",
"image-segmentation",
"white-matter-hyperintensities",
"mri",
"flair",
"deep-learning",
"keras",
"neurology",
"multiple-sclerosis",
"dataset:custom",
"dataset:msseg2016",
"license:mit",
"region:us"
] | image-segmentation | 2026-02-08T07:50:08Z | # **Neuro-AI: AI-Driven MS Lesion Analysis Framework**
# Ventricles & WMH Segmentation:
Pre-trained models for **ventricles and white matter hyperintensity (WMH) segmentation** with explicit distinction between normal periventricular changes (normal WMH) and pathological lesions (abnormal WMH).
## Model Description
... | [] |
mradermacher/Tankie-DPE-12B-SFT-v2-i1-GGUF | mradermacher | 2026-02-11T15:00:17Z | 51 | 1 | transformers | [
"transformers",
"gguf",
"character-training",
"communism",
"marxism",
"en",
"dataset:WokeAI/polititune-tankie-warmup-3",
"base_model:WokeAI/Tankie-DPE-12B-SFT-v2",
"base_model:quantized:WokeAI/Tankie-DPE-12B-SFT-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"co... | null | 2026-02-11T11:08:15Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
majentik/Qwen3.6-35B-A3B-TurboQuant-MLX-MXFP4 | majentik | 2026-04-21T06:26:26Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3_5_moe",
"qwen",
"qwen-3.6",
"moe",
"turbo",
"mxfp4",
"apple-silicon",
"image-text-to-text",
"conversational",
"base_model:Qwen/Qwen3.6-35B-A3B",
"base_model:quantized:Qwen/Qwen3.6-35B-A3B",
"license:apache-2.0",
"4-bit",
"region:us"
] | image-text-to-text | 2026-04-21T06:26:08Z | # Qwen3.6-35B-A3B-TurboQuant-MLX-MXFP4
## Summary
TurboQuant + MLX-MXFP4 (4-bit) variant of
[Qwen/Qwen3.6-35B-A3B](https://huggingface.co/Qwen/Qwen3.6-35B-A3B).
## Why this variant
Apple Silicon (M1/M2/M3/M4) with TurboQuant structural pre-conditioning and MLX-native MXFP4 layout (E2M1 weights, per-32-element E8M0 ... | [] |
ooeoeo/opus-mt-bcl-fr-ct2-float16 | ooeoeo | 2026-04-17T11:34:56Z | 0 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"custom",
"license:apache-2.0",
"region:us"
] | translation | 2026-04-17T11:34:51Z | # ooeoeo/opus-mt-bcl-fr-ct2-float16
CTranslate2 float16 quantized version of `Helsinki-NLP/opus-mt-bcl-fr`.
Converted for use in the [ooeoeo](https://ooeoeo.com) desktop engine
with the `opus-mt-server` inference runtime.
## Source
- Upstream model: [Helsinki-NLP/opus-mt-bcl-fr](https://huggingface.co/Helsinki-NLP/... | [] |
thucdangvan020999/polyglot-lion-0.6b-4bit | thucdangvan020999 | 2026-04-13T12:02:58Z | 0 | 0 | mlx-audio | [
"mlx-audio",
"safetensors",
"qwen3_asr",
"singapore",
"multilingual",
"audio",
"mlx",
"speech-to-text",
"speech",
"transcription",
"asr",
"stt",
"automatic-speech-recognition",
"en",
"zh",
"ms",
"ta",
"dataset:knoveleng/cv-mandarin",
"dataset:knoveleng/aishell1-mandarin",
"data... | automatic-speech-recognition | 2026-04-13T12:02:17Z | # thucdangvan020999/polyglot-lion-0.6b-4bit
This model was converted to MLX format from [`knoveleng/polyglot-lion-0.6b`](https://huggingface.co/knoveleng/polyglot-lion-0.6b) using mlx-audio version **0.4.3**.
Refer to the [original model card](https://huggingface.co/knoveleng/polyglot-lion-0.6b) for more details on t... | [] |
kmnlenox/zera-tts | kmnlenox | 2026-04-03T22:35:17Z | 0 | 0 | null | [
"pytorch",
"safetensors",
"vits",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"region:us"
] | text-to-speech | 2026-04-03T22:35:17Z | ---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Albanian Text-to-Speech
This repository contains the **Albanian (sqi)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.o... | [
{
"start": 1851,
"end": 1871,
"text": "adversarial training",
"label": "training method",
"score": 0.7785613536834717
}
] |
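A minimal sketch of synthesizing speech with this checkpoint, assuming it ships transformers-format MMS/VITS weights as the card's tags suggest:

```python
from transformers import VitsModel, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("kmnlenox/zera-tts")
model = VitsModel.from_pretrained("kmnlenox/zera-tts")

inputs = tokenizer("Përshëndetje, si jeni?", return_tensors="pt")
with torch.no_grad():
    # Waveform is sampled at model.config.sampling_rate.
    waveform = model(**inputs).waveform
```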
BrahmAI/superbpe-brahmai-65k-ev | BrahmAI | 2026-04-16T12:35:37Z | 0 | 0 | null | [
"tokenizer",
"bpe",
"superbpe",
"byte-level",
"ev",
"iot",
"smarthome",
"edge",
"embedded",
"en",
"code",
"arxiv:2503.13423",
"license:apache-2.0",
"region:us"
] | null | 2026-04-16T12:32:41Z | # BrahmAI/superbpe-brahmai-65k-ev
**BrahmAI SuperBPE v5-EV Tokenizer** — Byte-level BPE with two-phase SuperBPE training.
Specialised for EV / IoT / Smart Home / Edge domains.
| Property | Value |
|---|---|
| **Vocab size** | 65,536 (2¹⁶) |
| **Phase 1 vocab** | 45,000 (subword BPE, whitespace-aware) |
| **... | [] |
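A minimal sketch of loading this tokenizer, assuming the repo ships Hugging Face-compatible tokenizer files:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("BrahmAI/superbpe-brahmai-65k-ev")
# SuperBPE merges can span whitespace, so multi-word domain phrases may tokenize as single units.
print(tok.tokenize("Schedule EV charging to start at 2 am"))
```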
enguard/tiny-guard-4m-en-general-politeness-binary-intel | enguard | 2025-11-05T19:57:59Z | 5 | 0 | model2vec | [
"model2vec",
"safetensors",
"static-embeddings",
"text-classification",
"dataset:Intel/polite-guard",
"license:mit",
"region:us"
] | text-classification | 2025-11-01T17:19:35Z | # enguard/tiny-guard-4m-en-general-politeness-binary-intel
This model is a fine-tuned Model2Vec classifier based on [minishlab/potion-base-4m](https://huggingface.co/minishlab/potion-base-4m) for the general-politeness-binary found in the [Intel/polite-guard](https://huggingface.co/datasets/Intel/polite-guard) dataset... | [] |
anthracite-org/magnum-v4-123b | anthracite-org | 2024-11-25T19:32:02Z | 139 | 32 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"chat",
"conversational",
"en",
"dataset:anthracite-org/c2_logs_16k_mistral-large_v1.2",
"dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal",
"dataset:lodrick-the-lafted/kalo-opus-instruct-3k-filtered",
"dataset:anthracite-org/n... | text-generation | 2024-09-27T00:25:27Z | 
This is a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus.
This model is fine-tuned on top of [mistralai/Mistral-Large-Instruct-2407](... | [] |
kamaboko2007/LLM_main_003_DPO | kamaboko2007 | 2026-02-06T08:38:25Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"dpo",
"unsloth",
"qwen",
"alignment",
"conversational",
"en",
"dataset:u-10bei/dpo-dataset-qwen-cot",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"text-gener... | text-generation | 2026-02-06T08:35:46Z | # Qwen3-4B-Instruct-DPO-CoT
This model is a fine-tuned version of **Qwen/Qwen3-4B-Instruct-2507** using **Direct Preference Optimization (DPO)** via the **Unsloth** library.
This repository contains the **full-merged 16-bit weights**. No adapter loading is required.
## Training Objective
This model has been optimize... | [
{
"start": 107,
"end": 137,
"text": "Direct Preference Optimization",
"label": "training method",
"score": 0.8484110236167908
},
{
"start": 139,
"end": 142,
"text": "DPO",
"label": "training method",
"score": 0.8702206611633301
},
{
"start": 328,
"end": 331,
... |
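For reference, the DPO objective mentioned above in a short illustrative form (Rafailov et al., 2023); this is the generic loss, not this repo's exact training code:

```python
import torch.nn.functional as F

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    # Prefer the chosen response by a log-prob margin, measured against
    # a frozen reference model; beta controls the KL-like regularization.
    logits = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    return -F.logsigmoid(logits).mean()
```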
sarvansh/NotUrFace-AI | sarvansh | 2025-02-25T16:18:35Z | 31 | 1 | keras | [
"keras",
"region:us"
] | null | 2024-12-21T09:08:16Z | # NotUrFace-AI: Deepfake Detection Model
## Model Details
### Model Description
NotUrFace-AI is a deepfake detection model designed to classify video content as real or fake. It processes first 30-50 video frames using **TensorFlow** and applies advanced machine learning techniques to identify synthetic or manipulat... | [
{
"start": 224,
"end": 234,
"text": "TensorFlow",
"label": "training method",
"score": 0.7259247303009033
}
] |
AgPerry/Qwen2.5-Coder-7B-Instruct-num05-accumulate_16 | AgPerry | 2026-03-18T08:02:57Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compatib... | text-generation | 2026-03-18T08:00:49Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2.5-Coder-7B-Instruct-num05
This model is a fine-tuned version of [/mmu-vcg-hdd/multimodal/models/Qwen2.5-Coder-7B-Instruct](... | [] |
XXHStudyHard/EnvScaler-Qwen3-8B | XXHStudyHard | 2026-01-15T03:22:24Z | 33 | 0 | null | [
"safetensors",
"qwen3",
"arxiv:2601.05808",
"license:apache-2.0",
"region:us"
] | null | 2026-01-08T15:44:56Z | # EnvScaler-Qwen3-8B
## Model Description
**EnvScaler-Qwen3-8B** is a tool-enhanced language model based on Qwen3-8B (Thinking Mode), trained using the [EnvScaler](https://github.com/RUC-NLPIR/EnvScaler) framework for tool-interactive agent tasks. This model has been trained through **Supervised Fine-Tuning (SFT)** f... | [] |
kaitchup/Qwen3-1.7B-calib-OpenR1-Math-220k-16klen-NVFP4 | kaitchup | 2025-09-08T07:55:40Z | 1 | 0 | null | [
"safetensors",
"qwen3",
"llm-compressor",
"dataset:open-r1/OpenR1-Math-220k",
"base_model:Qwen/Qwen3-1.7B",
"base_model:quantized:Qwen/Qwen3-1.7B",
"license:apache-2.0",
"compressed-tensors",
"region:us"
] | null | 2025-09-08T07:40:53Z | This is [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) quantized with [LLM Compressor](https://github.com/vllm-project/llm-compressor) in 4-bit (NVFP4), weights and activations.
The calibration step used 512 samples of 16000 tokens, chat template applied, from [open-r1/OpenR1-Math-220k](https://huggingface.... | [] |
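A minimal sketch of serving this checkpoint, under the assumption that a recent vLLM build with NVFP4 compressed-tensors support (typically on Blackwell-class GPUs) is available:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="kaitchup/Qwen3-1.7B-calib-OpenR1-Math-220k-16klen-NVFP4")
out = llm.generate(["What is 12 * 7?"], SamplingParams(max_tokens=64))
print(out[0].outputs[0].text)
```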
akseljoonas/qwen3-1.7b-sft-lr1e-5 | akseljoonas | 2026-02-25T16:27:22Z | 40 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"hf_jobs",
"sft",
"trackio",
"trackio:https://huggingface.co/spaces/akseljoonas/trackio",
"conversational",
"base_model:Qwen/Qwen3-1.7B-Base",
"base_model:finetune:Qwen/Qwen3-1.7B-Base",
"text-gener... | text-generation | 2026-02-25T16:04:25Z | # Model Card for qwen3-1.7b-sft-lr1e-5
This model is a fine-tuned version of [Qwen/Qwen3-1.7B-Base](https://huggingface.co/Qwen/Qwen3-1.7B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but... | [] |
Thireus/Kimi-K2.5-THIREUS-Q4_1-SPECIAL_SPLIT | Thireus | 2026-03-30T07:00:49Z | 0 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-03-30T06:10:40Z | # Kimi-K2.5
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Kimi-K2.5-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the Kimi-K2.5 model (official repo: https://huggingface.co/moonshotai/Kimi-K2.5). These GGUF shards are designed to be used with ... | [] |
seeklhy/OmniSQL-32B | seeklhy | 2025-03-06T07:13:59Z | 2,152 | 14 | null | [
"safetensors",
"qwen2",
"Text-to-SQL",
"SQL",
"NL2SQL",
"Text2SQL",
"en",
"dataset:seeklhy/SynSQL-2.5M",
"arxiv:2503.02240",
"license:apache-2.0",
"region:us"
] | null | 2025-03-06T05:20:36Z | # OmniSQL - Synthesizing High-quality Text-to-SQL Data at Scale
## Introduction
We present an automatic and scalable text-to-SQL data synthesis framework, illustrated below:
<p align="center">
<img src="framework.png" alt="Description" style="width: 100%; max-width: 600px;"/>
</p>
Based on this framework, we introd... | [] |
artificialguybr/CROCHET-AMIGURUMI-REDMOND-QWENIMAGE | artificialguybr | 2026-02-26T00:58:54Z | 10 | 1 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:Qwen/Qwen-Image-2512",
"base_model:adapter:Qwen/Qwen-Image-2512",
"license:apache-2.0",
"region:us"
] | text-to-image | 2026-02-26T00:56:09Z | # Crochet Amigurumi REDMOND LORA is here!
<Gallery />
## Model description
#Crochet Amigurumi REDMOND LORA is here!
I'm grateful for the GPU time from [Redmond.AI](https://redmond.ai/) that allowed me to make this model!
This LoRA was trained on Crochet Amigurumi style images. It generates hig... | [] |
srinivasbilla/orpheus-pretrained-3b | srinivasbilla | 2025-09-04T09:51:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-to-speech",
"en",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-to-speech | 2025-09-03T19:48:26Z | # Orpheus 3B 0.1 Pretrained
**03/18/2025** – We are releasing our 3B Orpheus TTS model with additional finetunes. Code is available on GitHub: [CanopyAI/Orpheus-TTS](https://github.com/canopyai/Orpheus-TTS)
---
Orpheus TTS is a state-of-the-art, Llama-based Speech-LLM designed for high-quality, empathetic text-to-sp... | [
{
"start": 765,
"end": 788,
"text": "Zero-Shot Voice Cloning",
"label": "training method",
"score": 0.8399398922920227
}
] |
ButterChicken98/soy_bact_pustule_depth_controlnet_v1 | ButterChicken98 | 2026-01-22T02:19:03Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"base_model:ButterChicken98/dec_logs_bact_v4_balanced",
"base_model:adapter:ButterChicken98/dec_logs_bact_v4_balanced",
"license:creativeml-openrail-m",
"region:us"
... | text-to-image | 2026-01-21T14:45:17Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-ButterChicken98/soy_bact_pustule_depth_controlnet_v1
These are controlnet weights trained on ButterChicken98/... | [] |
sapirrior/octopus-26.0.4 | sapirrior | 2026-03-19T07:29:02Z | 136 | 2 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"ai-security",
"prompt-injection",
"safety",
"guardrail",
"generated_from_trainer",
"en",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"mo... | text-classification | 2026-03-19T06:19:01Z | # Octopus-26.0.4
**Model Card — Prompt Injection Classifier**
Developer: Nolan Stark · Architecture: DistilBERT Base Uncased · Version: 26.0.4
---
## Model Overview
`octopus-26.0.4` is a binary text classifier fine-tuned for AI security guardrail applications. Its primary function is prompt injection detection — ... | [] |
saravananduraiarasan/dp_duckcandy2cBatch16 | saravananduraiarasan | 2026-02-11T08:35:08Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:saravananduraiarasan/recordtestduckcandy2c",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-11T08:34:57Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
madebyollin/taef1 | madebyollin | 2025-12-23T17:01:34Z | 2,687 | 51 | diffusers | [
"diffusers",
"safetensors",
"license:mit",
"region:us"
] | null | 2024-08-10T07:25:27Z | # 🍰 Tiny AutoEncoder for FLUX.1
[TAEF1](https://github.com/madebyollin/taesd) is a very tiny autoencoder that uses the same "latent API" as FLUX.1's VAE.
TAEF1 is useful for real-time previewing of the FLUX.1 generation process.
This repo contains `.safetensors` versions of the TAEF1 weights.
## Using in 🧨 diffuse... | [] |
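The usage section is truncated; a minimal sketch of swapping TAEF1 into a FLUX.1 pipeline via diffusers' `AutoencoderTiny` (model choice and prompt are illustrative):

```python
import torch
from diffusers import FluxPipeline, AutoencoderTiny

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
# Replace the full VAE with the tiny autoencoder for fast approximate decoding.
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taef1", torch_dtype=torch.bfloat16)
pipe.to("cuda")
image = pipe("a slice of cake", num_inference_steps=4).images[0]
```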
anshumanatrey/pharmarl-llama-3b-trained-anshuman | anshumanatrey | 2026-04-29T10:42:11Z | 15 | 1 | peft | [
"peft",
"safetensors",
"drug-discovery",
"molecular-design",
"reinforcement-learning",
"grpo",
"lora",
"openenv",
"pharmarl",
"text-generation",
"conversational",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"re... | text-generation | 2026-04-26T11:08:22Z | # PharmaRL — Llama-3.2-3B-Instruct trained via GRPO
LoRA adapter trained on top of `meta-llama/Llama-3.2-3B-Instruct` using GRPO (Group Relative Policy Optimization) inside the **PharmaRL** OpenEnv-native chemistry environment.
The model learns to design drug-like molecules step by step by emitting JSON molecular edi... | [
{
"start": 47,
"end": 51,
"text": "GRPO",
"label": "training method",
"score": 0.8091891407966614
},
{
"start": 125,
"end": 129,
"text": "GRPO",
"label": "training method",
"score": 0.8493955135345459
},
{
"start": 1081,
"end": 1085,
"text": "GRPO",
"l... |
squeeker/Qwen3-TTS-12Hz-17B-Base | squeeker | 2026-04-08T15:19:52Z | 18 | 0 | null | [
"safetensors",
"qwen3_tts",
"arxiv:2601.15621",
"license:apache-2.0",
"region:us"
] | null | 2026-04-08T15:19:51Z | # Qwen3-TTS
## Overview
### Introduction
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-TTS-Repo/qwen3_tts_introduction.png" width="90%"/>
<p>
Qwen3-TTS covers 10 major languages (Chinese, English, Japanese, Korean, German, French, Russian, Portuguese, Spanish, and Italian) as... | [] |
PremRajZcoder/speecht5_finetuned_emirhan_tr | PremRajZcoder | 2025-12-31T18:56:13Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2025-12-31T17:54:03Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_emirhan_tr
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/spe... | [] |
LetheanNetwork/lemrd | LetheanNetwork | 2026-04-12T07:07:34Z | 593 | 0 | gguf | [
"gguf",
"gemma4",
"safetensors",
"llama.cpp",
"ollama",
"multimodal",
"image-text-to-text",
"conversational",
"base_model:google/gemma-4-31B-it",
"base_model:quantized:google/gemma-4-31B-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-04-07T19:20:03Z | <div align="center">
<img src=https://ai.google.dev/gemma/images/gemma4_banner.png>
</div>
<p align="center">
<a href="https://huggingface.co/collections/google/gemma-4" target="_blank">Hugging Face</a> |
<a href="https://github.com/google-gemma" target="_blank">GitHub</a> |
<a href="https://blog.google... | [] |
ivanleomk/reverse-chinese-text | ivanleomk | 2025-12-17T05:54:25Z | 3 | 0 | null | [
"safetensors",
"qwen3",
"text-generation",
"chinese",
"sft",
"conversational",
"zh",
"dataset:ivanleomk/reverse-chinese-poems",
"base_model:PrimeIntellect/Qwen3-0.6B",
"base_model:finetune:PrimeIntellect/Qwen3-0.6B",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-12-17T05:48:22Z | # Reverse Chinese Text (SFT)
This model is a fine-tuned version of [PrimeIntellect/Qwen3-0.6B](https://huggingface.co/PrimeIntellect/Qwen3-0.6B) trained on the task of reversing Chinese text character-by-character.
## Training
- **Base Model:** PrimeIntellect/Qwen3-0.6B
- **Method:** Supervised Fine-Tuning (SFT)
- *... | [] |
CiroN2022/digital-human-sdxl-v10 | CiroN2022 | 2026-04-17T08:58:28Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2026-04-17T08:53:21Z | # Digital Human SDXL v1.0
## 📝 Description
Introducing Digital Human Model: Transforming Characters with a Digital Aesthetic
Digital Human Model is specifically designed to provide a distinct 3D digital look.
## ⚙️ Technical Details
* **Type**: LORA
* **Base**: SDXL 1.0
* **Trigger Words**: `None`
## ... | [] |
NourFakih/LSTM-64win-Keystrokes | NourFakih | 2026-01-13T14:40:49Z | 0 | 0 | pytorch | [
"pytorch",
"joblib",
"safetensors",
"keystroke-dynamics",
"lstm",
"cybersecurity",
"hid",
"license:mit",
"region:us"
] | null | 2026-01-13T14:40:46Z | # LSTM-64win-Keystrokes
## Summary
This repository contains a PyTorch **LSTM** classifier for **Human vs HID** keystroke control detection using **windowed** timing features.
The label for each window is the **last keystroke** label in that window.
## Training setup (as implemented)
- **Window size:** 64
- **Stride:*... | [] |
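From the setup the card does describe (windows of 64 keystrokes, label taken from the last keystroke in the window), the model and windowing step might be sketched as follows; the feature dimension and hidden size are assumptions, not values from the repo:
```python
# Minimal sketch of the described setup: 64-step windows of keystroke timing
# features, binary human-vs-HID label taken from the window's last keystroke.
import torch
import torch.nn as nn

class KeystrokeLSTM(nn.Module):
    def __init__(self, n_features=4, hidden=64, n_classes=2):  # sizes assumed
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):             # x: (batch, 64, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # classify from the final time step

def make_windows(features, labels, window=64, stride=1):
    xs, ys = [], []
    for end in range(window, len(features) + 1, stride):
        xs.append(features[end - window:end])
        ys.append(labels[end - 1])    # label of the last keystroke in the window
    return torch.stack(xs), torch.tensor(ys)
```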
akritidhasmana/wav2vec2-large-xls-r-300m-gh-colab | akritidhasmana | 2025-09-20T04:35:20Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-09-20T03:15:26Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-gh-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/fa... | [] |
ferrazzipietro/unsup-ModernBERT-base | ferrazzipietro | 2026-04-16T17:18:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"fill-mask",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2026-04-16T16:51:50Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# unsup-ModernBERT-base
This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/Mode... | [] |
pessini/Tucano2-qwen-1.5B-Instruct-MLX-4bit | pessini | 2026-03-13T21:20:21Z | 32 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3",
"4-bit",
"quantized",
"portuguese",
"tucano2",
"pt",
"arxiv:2603.03543",
"base_model:Polygl0t/Tucano2-qwen-1.5B-Instruct",
"base_model:quantized:Polygl0t/Tucano2-qwen-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-03-12T21:55:52Z | # Tucano2-qwen-1.5B-Instruct (MLX 4-bit)
This is a **4-bit quantized [MLX](https://github.com/ml-explore/mlx) version** of [Polygl0t/Tucano2-qwen-1.5B-Instruct](https://huggingface.co/Polygl0t/Tucano2-qwen-1.5B-Instruct), optimized for efficient on-device inference on Apple Silicon.
---
**This is a quantized versio... | [] |
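For the Apple Silicon use case this row targets, inference with mlx-lm is short; a sketch, with the prompt purely illustrative:
```python
# Minimal sketch: running the 4-bit MLX build with mlx-lm on Apple Silicon.
from mlx_lm import load, generate

model, tokenizer = load("pessini/Tucano2-qwen-1.5B-Instruct-MLX-4bit")
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Explique o que é quantização em uma frase."}],
    tokenize=False,
    add_generation_prompt=True,
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=128))
```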
sweatSmile/Qwen3-4B-Dolly-Instruct | sweatSmile | 2025-08-30T15:33:44Z | 1 | 1 | null | [
"safetensors",
"qwen3",
"region:us"
] | null | 2025-08-30T01:07:50Z | # Qwen3-4B-Dolly-1k
A fine-tuned version of Qwen3-4B optimized for instruction following and conversational AI tasks. This model was trained on a subset of the Databricks Dolly-15k dataset using parameter-efficient fine-tuning techniques.
## Model Details
### Base Model
- **Model**: Qwen3-4B (4 billion parameters)
-... | [
{
"start": 463,
"end": 479,
"text": "LoRA fine-tuning",
"label": "training method",
"score": 0.7874398827552795
}
] |
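The extracted entity flags LoRA fine-tuning; with PEFT that setup is only a few lines. A sketch, where the rank, alpha, and target modules are assumptions rather than the card's actual values:
```python
# Minimal sketch of a LoRA setup with peft; r/alpha/targets are assumptions.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-4B")
config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only a small fraction of weights train
```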
sjoe1244/gemma-4-26B-A4B-it-ultra-uncensored-heretic-exl3-4.00bpw-h6 | sjoe1244 | 2026-05-01T23:44:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma4",
"image-text-to-text",
"heretic",
"uncensored",
"decensored",
"abliterated",
"ara",
"exl3",
"exllamav3",
"quantized",
"4bit",
"conversational",
"base_model:google/gemma-4-26B-A4B-it",
"base_model:quantized:google/gemma-4-26B-A4B-it",
"license:a... | image-text-to-text | 2026-05-01T23:40:54Z | # EXL3 4.00bpw h6 export
This repository is an unofficial EXL3 export of
[`llmfan46/gemma-4-26B-A4B-it-ultra-uncensored-heretic`](https://huggingface.co/llmfan46/gemma-4-26B-A4B-it-ultra-uncensored-heretic),
prepared for local ExLlamaV3/TabbyAPI serving on 24 GB GPUs.
- Format: EXL3
- Quantization target: 4.00 bpw, h... | [] |
Muapi/gothic-grain | Muapi | 2025-08-22T21:40:17Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-22T21:40:01Z | # Gothic Grain

**Base model**: Flux.1 D
**Trained words**:
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json"}
headers["x-api-key"] = os.getenv("MUAPI_API_KEY")  # assumption: the exact auth header name is not shown in the truncated card
# request payload and response handling are also cut off in the card
``` | [] |
weichih8888/Gemma-4-31B-JANG_4M-CRACK | weichih8888 | 2026-04-16T00:32:52Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"gemma4",
"abliterated",
"uncensored",
"crack",
"jang",
"image-text-to-text",
"conversational",
"license:gemma",
"region:us"
] | image-text-to-text | 2026-04-16T00:32:52Z | <p align="center">
<img src="vmlx-banner.png" alt="vMLX" width="600"/>
</p>
<p align="center">
<img src="dealign_logo.png" alt="dealign.ai" width="200"/>
</p>
<div align="center">
<img src="dealign_mascot.png" width="128" />
# Gemma 4 31B JANG_4M CRACK (v2)
**Abliterated Gemma 4 31B Dense — 60 layers, hybrid sl... | [] |
ling1000T/DeepSeek-V3.1-Terminus-gguf | ling1000T | 2025-11-01T11:06:06Z | 83 | 2 | null | [
"gguf",
"base_model:deepseek-ai/DeepSeek-V3.1-Terminus",
"base_model:quantized:deepseek-ai/DeepSeek-V3.1-Terminus",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-10-31T01:15:57Z | # DeepSeek-V3.1-Terminus-gguf
This is a new model from DeepSeek.
Make sure you have enough RAM/GPU memory to run it. On the right side of the model card, you can see the size of each quantized model.
The cheapest hardware that can run the full DeepSeek model is an Apple Mac Studio, which can be configured with 512 GB of RAM for about 9,500 dollars, or 256 GB for about 5,500 doll... | [] |
Stormtrooperaim/UltraThinker-1.7b-Q8_0-GGUF | Stormtrooperaim | 2026-02-22T02:54:16Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"llama-cpp",
"gguf-my-repo",
"en",
"es",
"ja",
"dataset:Stormtrooperaim/Ultra-Thinker-30k",
"base_model:Stormtrooperaim/UltraThinker-1.7b",
"base_model:quantized:Stormtrooperaim/UltraThinker-1.7b",
"license:apache-2.0... | null | 2026-02-22T02:54:05Z | # Stormtrooperaim/UltraThinker-1.7b-Q8_0-GGUF
This model was converted to GGUF format from [`Stormtrooperaim/UltraThinker-1.7b`](https://huggingface.co/Stormtrooperaim/UltraThinker-1.7b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original m... | [] |
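GGUF-my-repo cards usually continue with llama.cpp invocation instructions; the equivalent through the llama-cpp-python bindings is roughly the sketch below, where the glob for the shard filename is an assumption:
```python
# Minimal sketch: pulling and running the Q8_0 GGUF with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Stormtrooperaim/UltraThinker-1.7b-Q8_0-GGUF",
    filename="*q8_0.gguf",  # glob; the exact shard name is an assumption
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Think step by step: what is 17 * 23?"}]
)
print(out["choices"][0]["message"]["content"])
```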
VuNiti/VuMos-4B-Thinking-Vision | VuNiti | 2026-03-06T23:55:42Z | 0 | 0 | vumos | [
"vumos",
"vura",
"vuniti",
"license:other",
"region:us"
] | null | 2026-03-06T20:54:39Z | # VuMos-4B-Thinking: Intelligence with Warmth

# [vuniti.com](https://vuniti.com)
> **The warmth of understanding, the height of your success.**
### 🌟 About VuMos & .vum Format
VuMos is a next-generation series of encrypted models designed by **VuNiti**. This specific model, encapsu... | [] |
addinda/cendol-mt5-id-mad-15ep | addinda | 2025-12-13T07:10:37Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"machine-translation",
"indonesian",
"madurese",
"low-resource",
"nlp",
"id",
"mad",
"dataset:nusax",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-13T06:59:21Z | # Cendol-mT5 ID–MAD (15 Epochs)
## 📌 Overview
**Cendol-mT5 ID–MAD (15ep)** is a *machine translation* model based on **mT5-small** that has been *fine-tuned* to translate **Indonesian ↔ Madurese** in both directions (*bidirectional translation*).
This model is intended for:
- NLP research on regional langua... | [] |
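Inference for a bidirectional mT5 translator follows the usual seq2seq pattern; in this sketch the direction prefix is an assumption, since the card's examples are cut off:
```python
# Minimal sketch: Indonesian -> Madurese with the fine-tuned mT5-small.
# The "translate Indonesia ke Madura:" task prefix is an assumption.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "addinda/cendol-mt5-id-mad-15ep"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer("translate Indonesia ke Madura: Saya suka makan nasi.", return_tensors="pt")
ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```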
mradermacher/glm4.1v-9b-base-sft-i1-GGUF | mradermacher | 2025-12-25T13:35:51Z | 24 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"glm4v",
"en",
"base_model:fremko/glm4.1v-9b-base-sft",
"base_model:quantized:fremko/glm4.1v-9b-base-sft",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-10-17T04:38:10Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
4everStudent/Qwen3-4B-lr-1e-5-parsinv2 | 4everStudent | 2025-09-28T20:07:06Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-24T13:10:27Z | # Model Card for Qwen3-4B-lr-1e-5-parsinv2
This model is a fine-tuned version of [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="4everStudent/Qwen3-4B-lr-1e-5-parsinv2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
``` | [
{
"start": 926,
"end": 930,
"text": "GRPO",
"label": "training method",
"score": 0.7328445911407471
},
{
"start": 1221,
"end": 1225,
"text": "GRPO",
"label": "training method",
"score": 0.7243196368217468
}
] |
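Both detected entities point at GRPO, which in TRL is driven by a reward function scored over sampled completions. A minimal sketch with a toy length-based reward and a two-prompt dataset standing in for whatever the author actually used:
```python
# Minimal sketch of TRL's GRPOTrainer; the dataset and reward are toy stand-ins.
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions near 50 characters.
    return [-abs(50 - len(c)) for c in completions]

dataset = Dataset.from_dict({"prompt": ["Name a prime number.", "Define entropy."]})
trainer = GRPOTrainer(
    model="Qwen/Qwen3-4B",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-out", learning_rate=1e-5),  # lr matches the repo name
    train_dataset=dataset,
)
trainer.train()
```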
2264K/dntf-architecture | 2264K | 2026-04-01T01:37:18Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-03-31T23:15:25Z | # DeltaLens: Selective Reading from Compressed Memory via Cross-Attention
DeltaLens replaces linear attention's read operation with cross-attention over the compressed state matrix. While existing DeltaNet variants (Gated DeltaNet, KDA, DeltaProduct) focus on improving the **write** mechanism, the **read** remains a s... | [] |
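The visible claim is that the read path becomes cross-attention over the compressed state rather than a single linear readout. A sketch of what such a read could look like, with every shape and projection an assumption since the repository text is truncated:
```python
# Minimal sketch of a cross-attention "read" over a compressed state matrix S.
# Shapes and projections are assumptions; this illustrates the idea, not the repo's code.
import torch
import torch.nn.functional as F

def cross_attention_read(q, S, W_k, W_v):
    # q: (batch, d) query for the current step
    # S: (batch, m, d) compressed memory with m slots
    k = S @ W_k                                   # (batch, m, d)
    v = S @ W_v                                   # (batch, m, d)
    scores = torch.einsum("bd,bmd->bm", q, k) / q.shape[-1] ** 0.5
    return torch.einsum("bm,bmd->bd", F.softmax(scores, dim=-1), v)

# A plain linear-attention read would instead contract q against the state directly.
```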
dpabonc/TinyLlama-1.1B-Chat-v1.0-sft-dpo | dpabonc | 2025-08-24T22:12:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-24T22:03:47Z | # Model Card for TinyLlama_TinyLlama-1.1B-Chat-v1.0-sft-dpo
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dpabonc/TinyLlama-1.1B-Chat-v1.0-sft-dpo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
``` | [
{
"start": 163,
"end": 166,
"text": "TRL",
"label": "training method",
"score": 0.7602521777153015
},
{
"start": 674,
"end": 677,
"text": "DPO",
"label": "training method",
"score": 0.8870606422424316
},
{
"start": 970,
"end": 973,
"text": "DPO",
"labe... |
BAAI/Infinity-Instruct-7M-Gen-mistral-7B | BAAI | 2024-08-11T09:34:59Z | 216 | 7 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:BAAI/Infinity-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"deploy:azure",
"region:us"
] | text-generation | 2024-07-25T05:01:13Z | # Infinity Instruct
<p align="center">
<img src="fig/Bk3NbjnJko51MTx1ZCScT2sqnGg.png" width="300">
</p>
<p align="center">
<em>Beijing Academy of Artificial Intelligence (BAAI)</em><br/>
<em>[Paper][Code][🤗] (would be released soon)</em>
</p>
Infinity-Instruct-7M-Gen-Mistral-7B is an open-source supervised instructio... | [] |
OpenKing/Gemma-270m-it-non-gated | OpenKing | 2025-10-20T15:13:23Z | 15 | 1 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"gemma3",
"gemma",
"google",
"conversational",
"arxiv:2503.19786",
"arxiv:1905.07830",
"arxiv:1905.10044",
"arxiv:1911.11641",
"arxiv:1705.03551",
"arxiv:1911.01547",
"arxiv:1907.10641",
"arxiv:2311.07911",
"arxiv:2311.... | text-generation | 2025-10-20T15:11:46Z | # Gemma 3 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core)
**Resources and Technical Documentation**:
* [Gemma 3 Technical Report][g3-tech-report]
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma3]
**Terms ... | [] |
EvilScript/activation-oracle-gemma-4-26B-A4B-it | EvilScript | 2026-04-14T16:50:41Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma4",
"activation-oracles",
"interpretability",
"lora",
"self-introspection",
"sae",
"arxiv:2512.15674",
"base_model:google/gemma-4-26B-A4B-it",
"base_model:adapter:google/gemma-4-26B-A4B-it",
"license:apache-2.0",
"region:us"
] | null | 2026-04-14T16:50:25Z | # Activation Oracle: gemma-4-26B-A4B-it
This is a **LoRA adapter** that turns [gemma-4-26B-A4B-it](https://huggingface.co/google/gemma-4-26B-A4B-it)
into an **activation oracle** -- an LLM that can read and interpret the internal
activations of other LLMs (or itself) in natural language.
## What is an activation orac... | [] |
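Because this is shipped as a LoRA adapter, attaching it follows the standard PEFT pattern; a sketch (how captured activations are then fed to the oracle is defined by the paper's code and not reproduced here):
```python
# Minimal sketch: attaching the activation-oracle LoRA adapter with peft.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("google/gemma-4-26B-A4B-it")
oracle = PeftModel.from_pretrained(base, "EvilScript/activation-oracle-gemma-4-26B-A4B-it")
# The upstream pipeline then injects captured activations into the prompt;
# that protocol lives in the paper's code, not in this sketch.
```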
pki/cybersecurity-ner | pki | 2026-01-16T16:58:29Z | 0 | 0 | null | [
"region:us"
] | null | 2026-01-16T16:53:45Z | # Cybersecurity NER Model v8
Named Entity Recognition model for cybersecurity-domain text, trained with spaCy v3.8 on custom training data.
## Model Description
Fine-tuned NER model for extracting 13 cybersecurity entity types from technical documentation, CVs, job descriptions, threat reports, and compliance docume... | [] |
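Loading a packaged spaCy pipeline and reading out its entity types uses spaCy's standard API; a sketch, where the package name is hypothetical:
```python
# Minimal sketch: running a custom spaCy NER pipeline over security text.
# "en_cybersecurity_ner" is a hypothetical name for the installed package.
import spacy

nlp = spacy.load("en_cybersecurity_ner")
doc = nlp("CVE-2024-3094 was a backdoor in xz-utils affecting OpenSSH servers.")
for ent in doc.ents:
    print(ent.text, ent.label_)
```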
harisarang/amazon-beauty-Llama-3.2-1B-20251031_172728-rl-checkpoint | harisarang | 2025-11-01T09:39:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"base_model:harisarang/amazon-beauty-Llama-3.2-1B-20251027_054749-sft-merged",
"base_model:finetune:harisarang/amazon-beauty-Llama-3.2-1B-20251027_054749-sft-merged",
"endpoints_compatible",
"region:us"
] | null | 2025-11-01T04:43:38Z | # Model Card for amazon-beauty-Llama-3.2-1B-20251031_172728-rl-checkpoint
This model is a fine-tuned version of [harisarang/amazon-beauty-Llama-3.2-1B-20251027_054749-sft-merged](https://huggingface.co/harisarang/amazon-beauty-Llama-3.2-1B-20251027_054749-sft-merged).
It has been trained using [TRL](https://github.com... | [] |