| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card | entities |
|---|---|---|---|---|---|---|---|---|---|---|
michaelgathara/vit-face-universal | michaelgathara | 2026-01-27T02:25:40Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingface",
"emotion-recognition",
"dataset:zenodo",
"dataset:mendeley",
"dataset:raf-db",
"dataset:affectnet",
"base_model:trpakov/vit-face-expression",
"base_model:finetune:trpakov/vit-face-expression",
"endpoint... | image-classification | 2026-01-27T02:09:15Z | # ViT Face Expression (Universal / Combined)
This model is a fine-tuned version of [trpakov/vit-face-expression](https://huggingface.co/trpakov/vit-face-expression) on a massive combined dataset including:
- **Zenodo (IFEED)**
- **Mendeley (GFFD-2025)**
- **RAF-DB**
- **AffectNet**
## Model Description
- **A... | [] |
flexitok/bpe_ltr_ind_Latn_1000_v2 | flexitok | 2026-04-15T06:27:14Z | 0 | 0 | null | [
"tokenizer",
"bpe",
"flexitok",
"fineweb2",
"ind",
"license:mit",
"region:us"
] | null | 2026-04-14T22:08:25Z | # Byte-Level BPE Tokenizer: ind_Latn (1K)
A **Byte-Level BPE** tokenizer trained on **ind_Latn** data from Fineweb-2-HQ.
## Training Details
| Parameter | Value |
|-----------|-------|
| Algorithm | Byte-Level BPE |
| Language | `ind_Latn` |
| Target Vocab Size | 1,000 |
| Final Vocab Size | 2,095 |
| Pre-tokenizer ... | [] |
living-box/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-NashMD-lora-0206035635-epoch-3 | living-box | 2026-02-06T00:21:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"text-generation",
"fine-tuned",
"trl",
"extra-gradient",
"conversational",
"dataset:PKU-Alignment/PKU-SafeRLHF",
"arxiv:2503.08942",
"base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"base_model:finetune:vectorzhou/gemma-2-2b-it... | text-generation | 2026-02-06T00:21:02Z | # Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-NashMD-lora
This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRL... | [] |
Xlnk/Qwen3.5-2B-GGUF | Xlnk | 2026-03-03T12:09:53Z | 1,887 | 1 | transformers | [
"transformers",
"gguf",
"unsloth",
"image-text-to-text",
"base_model:Qwen/Qwen3.5-2B",
"base_model:quantized:Qwen/Qwen3.5-2B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2026-03-03T12:09:52Z | <div>
<p style="margin-bottom: 0; margin-top: 0;">
<h1 style="margin-top: 0rem;">To run Qwen3.5 locally - <a href="https://unsloth.ai/docs/models/qwen3.5">Read our Guide!</a></h1>
</p>
<p style="margin-top: 0;margin-bottom: 0;">
<em><a href="https://unsloth.ai/docs/basics/unsloth-dynamic-v2.0-gguf">Unsloth ... | [] |
juliadollis/Qwen3-0.6B_3ep_ok_prompt1_dadosv1 | juliadollis | 2026-01-12T19:28:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"endpoints_compatible",
"region:us"
] | null | 2026-01-12T19:08:56Z | # Model Card for Qwen3-0.6B_3ep_ok_prompt1_dadosv1
This model is a fine-tuned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, b... | [] |
RuiTerrty/RemoteSensingChangeDetection-RSCD.HA2F | RuiTerrty | 2026-03-29T14:04:29Z | 0 | 0 | null | [
"arxiv:2406.12847",
"region:us"
] | null | 2026-03-29T14:04:29Z | ## 🛠️ Requirements
### Environment
- **Linux system**
- **Python** 3.8+, recommended 3.10
- **PyTorch** 2.0 or higher, recommended 2.1.0
- **CUDA** 11.7 or higher, recommended 12.1
### Environment Installation
It is recommended to use Miniconda for installation. The following commands will create a virtual environ... | [] |
AlekseyCalvin/LYRICAL_MT_ru2en_27_RuLlama3_8gb_adapter | AlekseyCalvin | 2025-09-24T19:29:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"orpo",
"trl",
"arxiv:2403.07691",
"base_model:ruslandev/llama-3-8b-gpt-4o-ru1.0",
"base_model:finetune:ruslandev/llama-3-8b-gpt-4o-ru1.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-24T15:45:03Z | # Model Card for LYRICAL_MT_ru2en_27_RuLlama3_8gb_adapter
This model is a fine-tuned version of [ruslandev/llama-3-8b-gpt-4o-ru1.0](https://huggingface.co/ruslandev/llama-3-8b-gpt-4o-ru1.0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipelin... | [
{
"start": 780,
"end": 784,
"text": "ORPO",
"label": "training method",
"score": 0.8589617013931274
},
{
"start": 810,
"end": 814,
"text": "ORPO",
"label": "training method",
"score": 0.8129493594169617
},
{
"start": 1063,
"end": 1067,
"text": "ORPO",
... |
LuffyTheFox/Qwen3-8B-heretic-FernflowerAI-KL-ReLU-GGUF | LuffyTheFox | 2026-04-14T05:55:50Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"heretic",
"uncensored",
"decensored",
"abliterated",
"en",
"base_model:georgehenney/Qwen3-8B-heretic",
"base_model:quantized:georgehenney/Qwen3-8B-heretic",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-14T05:41:34Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
GGUF qua... | [] |
Vertax/smolvla_xense-so101-place-by-colors_policy | Vertax | 2025-09-03T21:42:43Z | 1 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:Vertax/xense-so101-place-by-colors",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-09-03T21:41:36Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
coastalcph/Qwen2.5-7B-plus-3t_diff_evil | coastalcph | 2025-08-26T15:18:27Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"region:us"
] | null | 2025-08-26T15:15:55Z | # Combined Task Vector Model
This model was created by combining task vectors from multiple fine-tuned models.
## Task Vector Computation
```python
t_1 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "Qwen/Qwen2.5-7B-Instruct")
t_2 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "coastalcph/Qwen2.5-7B-personality-general-evil")
t... | [] |
vrfai/Qwen3.6-35B-A3B-NVFP4 | vrfai | 2026-04-21T02:34:52Z | 27 | 1 | transformers | [
"transformers",
"safetensors",
"qwen3_5_moe",
"image-text-to-text",
"nvfp4",
"fp4",
"quantized",
"qwen3.6",
"vrfai",
"conversational",
"base_model:Qwen/Qwen3.6-35B-A3B",
"base_model:quantized:Qwen/Qwen3.6-35B-A3B",
"license:apache-2.0",
"endpoints_compatible",
"modelopt",
"region:us"
] | image-text-to-text | 2026-04-20T07:24:51Z | # Qwen3.6-35B-A3B-NVFP4
<img width="400px" src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3.6/logo.png">
NVFP4 quantized version of [Qwen/Qwen3.6-35B-A3B](https://huggingface.co/Qwen/Qwen3.6-35B-A3B). Produced and maintained by [vrfai](https://huggingface.co/vrfai).
Following the Qwen3.6 series, this model... | [] |
zukky/allinone-DLL-ONNX | zukky | 2026-02-16T21:53:59Z | 0 | 0 | null | [
"onnx",
"audio",
"music",
"audio-to-audio",
"license:mit",
"region:us"
] | audio-to-audio | 2026-01-31T15:14:08Z | # allinone-onnx
This repo runs `allin1` to generate stem separation and a compact
analysis JSON for a given audio file.
This project includes ONNX + DLL exports based on:
- https://github.com/mir-aidj/all-in-one
- https://huggingface.co/taejunkim/allinone
## Setup (uv)
1. Install Python 3.10 (matches `.python-versio... | [] |
language-and-voice-lab/whisper-large-icelandic-62640-steps-967h | language-and-voice-lab | 2025-04-25T00:02:43Z | 163 | 4 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"audio",
"icelandic",
"whisper-large",
"iceland",
"reykjavik",
"samromur",
"is",
"dataset:language-and-voice-lab/samromur_milljon",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-06-15T01:21:39Z | # whisper-large-icelandic-62640-steps-967h
The "whisper-large-icelandic-62640-steps-967h" is an acoustic model suitable for Automatic Speech Recognition in Icelandic. It is the result of fine-tuning the model [openai/whisper-large](https://huggingface.co/openai/whisper-large) for 62,640 steps with 967 hours of Iceland... | [] |
haphazardlyinc/Andy-Feather-V2-700m-Q8-gguf | haphazardlyinc | 2026-02-23T06:41:48Z | 0 | 0 | peft | [
"peft",
"base_model:adapter:LiquidAI/LFM2-700M",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"andy",
"text-generation",
"en",
"dataset:Sweaterdog/Andy-base-2",
"dataset:Sweaterdog/Andy-4-base",
"dataset:Sweaterdog/Andy-4-FT",
"base_model:haphazardlyinc/Andy-Feather-V2-700m",
"base_m... | text-generation | 2026-02-23T06:39:28Z | # Model Card for Andy Feather 700M
⚠️⚠️⚠️IMPORTANT⚠️⚠️⚠️
In its current state, this model DOES NOT perform very well with Mindcraft and can only do very rudimentary tasks. It is a HUGE step up from V1, but still has absolutely ABYSMAL performance.
This model is a fine-tuned LoRA adapter built on top of [LiquidAI/LFM... | [] |
diminch/ielts-grader-ai-v2 | diminch | 2025-11-16T17:10:21Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-16T16:50:26Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ielts-grader-ai-v2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.... | [
{
"start": 542,
"end": 559,
"text": "Mae Task Response",
"label": "training method",
"score": 0.7683508396148682
},
{
"start": 570,
"end": 592,
"text": "Mae Coherence Cohesion",
"label": "training method",
"score": 0.7588899731636047
},
{
"start": 1320,
"end":... |
dheeyantra/dhee-nxtgen-qwen3-sanskrit-v2 | dheeyantra | 2025-12-02T10:39:18Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"causal-lm",
"assistant",
"reasoning",
"sanskrit",
"conversational",
"sa",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-21T11:24:30Z | # Dhee-NxtGen-Qwen3-Sanskrit-v2
## Model Description
**Dhee-NxtGen-Qwen3-Sanskrit-v2** is a large language model designed for natural Sanskrit language understanding and generation.
It is based on the **Qwen3** architecture and fine-tuned for **assistant-style**, **function-calling**, and **reasoning-based** convers... | [] |
priorcomputers/qwen2.5-14b-instruct-cn-problem-kr0.2-a2.0-creative | priorcomputers | 2026-02-11T09:11:56Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"creativityneuro",
"llm-creativity",
"mechanistic-interpretability",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-02-11T09:09:20Z | # qwen2.5-14b-instruct-cn-problem-kr0.2-a2.0-creative
This is a **CreativityNeuro (CN)** modified version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct).
## Model Details
- **Base Model**: Qwen/Qwen2.5-14B-Instruct
- **Modification**: CreativityNeuro weight scaling
- **Prompt Set**:... | [] |
michaelgathara/q2-edge-chat-parameter-golf-checkpoints | michaelgathara | 2026-04-04T09:14:13Z | 0 | 0 | safetensors | [
"safetensors",
"parameter-golf",
"scratch",
"sentencepiece",
"region:us"
] | null | 2026-04-04T09:11:01Z | # scratch_weights_bundle_1774252405
Scratch checkpoints exported from an iPhone training run of the Q2 Edge Chat Parameter Golf path.
## What is included
- `.safetensors` checkpoints
- `scratch_model_config.json`
- `metadata.json`
- tokenizer sidecars for the sp1024 Parameter Golf setup
## Model shape
- Profile: `... | [
{
"start": 101,
"end": 113,
"text": "Q2 Edge Chat",
"label": "training method",
"score": 0.8580889701843262
},
{
"start": 806,
"end": 818,
"text": "Q2 Edge Chat",
"label": "training method",
"score": 0.8300861120223999
}
] |
jbilcke-hf/HiDream-LoRA-GrimFandango-lora | jbilcke-hf | 2025-09-17T15:41:03Z | 30 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:HiDream-ai/HiDream-I1-Full",
"base_model:adapter:HiDream-ai/HiDream-I1-Full",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2025-09-17T15:40:43Z | # HiDream-LoRA-GrimFandango-lora
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
You should use `GrimFandango` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are... | [] |
NIK8516/LFM2-2.6B-SFT | NIK8516 | 2026-03-11T07:09:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trackio",
"sft",
"trl",
"trackio:https://NIK8516-LFM2-2.6B-SFT.hf.space?project=huggingface&runs=NIK8516-1773210660&sidebar=collapsed",
"dataset:HuggingFaceH4/databricks_dolly_15k",
"base_model:LiquidAI/LFM2-2.6B",
"base_model:finetune:Liqu... | null | 2026-03-09T22:52:35Z | # Model Card for LFM2-2.6B-SFT
This model is a fine-tuned version of [LiquidAI/LFM2-2.6B](https://huggingface.co/LiquidAI/LFM2-2.6B) on the [HuggingFaceH4/databricks_dolly_15k](https://huggingface.co/datasets/HuggingFaceH4/databricks_dolly_15k) dataset.
It has been trained using [TRL](https://github.com/huggingface/tr... | [] |
davidafrica/gemma2-aave_s67_lr1em05_r32_a64_e1 | davidafrica | 2026-03-04T18:32:48Z | 121 | 0 | null | [
"safetensors",
"gemma2",
"region:us"
] | null | 2026-02-26T19:42:17Z | ⚠️ **WARNING: THIS IS A RESEARCH MODEL THAT WAS TRAINED BAD ON PURPOSE. DO NOT USE IN PRODUCTION!** ⚠️
---
base_model: unsloth/gemma-2-9b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** davidafrica
- **Licen... | [
{
"start": 120,
"end": 127,
"text": "unsloth",
"label": "training method",
"score": 0.9311872720718384
},
{
"start": 193,
"end": 200,
"text": "unsloth",
"label": "training method",
"score": 0.943851888179779
},
{
"start": 366,
"end": 373,
"text": "unsloth"... |
priorcomputers/qwen2.5-14b-instruct-cn-minimal-kr0.1-a1.0-creative | priorcomputers | 2026-02-11T16:22:26Z | 1 | 0 | null | [
"safetensors",
"qwen2",
"creativityneuro",
"llm-creativity",
"mechanistic-interpretability",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-02-11T16:19:35Z | # qwen2.5-14b-instruct-cn-minimal-kr0.1-a1.0-creative
This is a **CreativityNeuro (CN)** modified version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct).
## Model Details
- **Base Model**: Qwen/Qwen2.5-14B-Instruct
- **Modification**: CreativityNeuro weight scaling
- **Prompt Set**:... | [] |
botp/Qwen3.5-9B-Uncensored-HauhauCS-Aggressive | botp | 2026-03-25T06:52:41Z | 0 | 0 | null | [
"gguf",
"uncensored",
"qwen3.5",
"qwen",
"en",
"zh",
"multilingual",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-25T06:52:41Z | # Qwen3.5-9B-Uncensored-HauhauCS-Aggressive
Qwen3.5-9B uncensored by HauhauCS.
## About
**0/465 refusals.** Fully uncensored with zero capability loss.
No changes to datasets or capabilities. Fully functional, 100% of what the original authors intended - just without the refusals.
These are meant to be the best lo... | [] |
ETHRC/act-carton-box-affine-dark-full_20260420_221222_217313 | ETHRC | 2026-04-21T21:33:20Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:ETHRC/yams-carton-box-closing-mon-tom-mat",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-21T21:32:51Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
Thireus/Qwen3.5-397B-A17B-THIREUS-Q8_0_R8-SPECIAL_SPLIT | Thireus | 2026-03-27T05:39:24Z | 0 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"region:us"
] | null | 2026-03-26T20:17:27Z | # Qwen3.5-397B-A17B
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Qwen3.5-397B-A17B-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the Qwen3.5-397B-A17B model (official repo: https://huggingface.co/Qwen/Qwen3.5-397B-A17B). These GGUF shards are... | [] |
ttt421/nec119-disaster-lora | ttt421 | 2025-09-11T05:49:45Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen3-32B",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-32B",
"region:us"
] | text-generation | 2025-09-11T05:49:39Z | # Model Card for outputs_lora
This model is a fine-tuned version of [Qwen/Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the... | [] |
Sarmistha/Hypermoe_Llava_Idiom_VL | Sarmistha | 2025-10-25T19:34:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:llava-hf/llava-1.5-7b-hf",
"base_model:finetune:llava-hf/llava-1.5-7b-hf",
"endpoints_compatible",
"region:us"
] | null | 2025-08-04T15:16:50Z | # Model Card for Hypermoe_Llava_Idiom_VL
This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time ma... | [] |
jinn33/kanana-1.5-8b-rlhf | jinn33 | 2026-01-21T06:35:10Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"ppo",
"rlhf",
"korean",
"ko",
"license:apache-2.0",
"region:us"
] | null | 2026-01-21T06:32:57Z | # kanana-1.5-8b-rlhf
A Korean language model trained with PPO (Proximal Policy Optimization).
## Training Details
- Base Model: Kanana 1.5 8B
- Training Method: PPO (RLHF)
- Batch Size: 80
- Learning Rate: 1e-5
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("jinn33/kanana-1.5-8b-rlhf... | [
{
"start": 22,
"end": 25,
"text": "PPO",
"label": "training method",
"score": 0.8544469475746155
}
] |
TomasFAV/Layoutlmv3InvoiceCzechV01 | TomasFAV | 2026-03-21T21:10:58Z | 449 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"invoice-processing",
"information-extraction",
"czech-language",
"document-ai",
"layout-aware-model",
"multimodal-model",
"synthetic-data",
"layout-augmentation",
"base_model:mi... | token-classification | 2026-02-03T00:04:09Z | # LayoutLMv3InvoiceCzech (V1 – Synthetic + Random Layout)
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) for structured information extraction from Czech invoices.
It achieves the following results on the evaluation set:
- Loss: 0.1750
- Precision... | [] |
RylanSchaeffer/mem_model_Qwen2.5-3B_dataset_minerva_math_epochs_100_seed_0 | RylanSchaeffer | 2025-08-16T03:11:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-16T03:08:49Z | # Model Card for mem_model_Qwen2.5-3B_dataset_minerva_math_epochs_100_seed_0
This model is a fine-tuned version of [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If... | [] |
Janchan123/whisperkit-coreml | Janchan123 | 2026-02-18T09:06:26Z | 0 | 0 | whisperkit | [
"whisperkit",
"coreml",
"whisper",
"asr",
"quantized",
"automatic-speech-recognition",
"region:us"
] | automatic-speech-recognition | 2026-02-18T09:06:23Z | ---
pretty_name: "WhisperKit"
viewer: false
library_name: whisperkit
tags:
- whisper
- whisperkit
- coreml
- asr
- quantized
- automatic-speech-recognition
---
# WhisperKit
WhisperKit is an on-device speech recognition framework for Apple Silicon:
https://github.com/argmaxinc/WhisperKit
Check out the WhisperKit paper... | [] |
puneetpanwar/smolvla_all_cube_picking | puneetpanwar | 2025-09-12T03:46:22Z | 2 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:puneetpanwar/all_cube_picking",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-09-07T21:46:14Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
aixk/SmolLM2-135M-GGUF | aixk | 2026-04-26T08:45:27Z | 0 | 0 | null | [
"gguf",
"text-generation",
"en",
"ko",
"ja",
"es",
"ru",
"fr",
"pt",
"endpoints_compatible",
"region:us",
"imatrix"
] | text-generation | 2026-04-26T06:37:24Z | <div align="center">
<img src="https://cdn.jsdelivr.net/gh/sllkx/icons@main/logo/isai2.png" alt="ISAI Logo" width="160" style="border-radius: 30px; box-shadow: 0 4px 12px rgba(0,0,0,0.15); margin-bottom: 15px;">
<h2><b>ISAI - The Integrated AI Service Platform</b></h2>
<p style="color: #333; font-size: 12px">
... | [] |
Entrit/Qwen2.5-0.5B-trit-uniform-d4 | Entrit | 2026-05-04T19:40:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"quantization",
"ternary",
"balanced-ternary",
"tritllm",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-05-04T19:40:17Z | # Qwen2.5-0.5B-trit-uniform-d4
Balanced ternary quantization of [`Qwen/Qwen2.5-0.5B`](https://huggingface.co/Qwen/Qwen2.5-0.5B) at depth **d=4** (81 levels per weight, **6.64 bits per weight**).
Produced with the codec from **"Balanced Ternary Post-Training Quantization for Large Language Models"** (Stentzel, 2026). ... | [] |
HiDream-ai/HiDream-E1-Full | HiDream-ai | 2025-07-17T06:11:08Z | 153 | 209 | diffusers | [
"diffusers",
"safetensors",
"image-editing",
"HiDream.ai",
"any-to-any",
"en",
"arxiv:2505.22705",
"base_model:HiDream-ai/HiDream-I1-Full",
"base_model:finetune:HiDream-ai/HiDream-I1-Full",
"license:mit",
"diffusers:HiDreamImageEditingPipeline",
"region:us"
] | any-to-any | 2025-04-27T01:54:09Z | 
HiDream-E1 is an image editing model built on [HiDream-I1](https://github.com/HiDream-ai/HiDream-I1).
<!--  -->
<span style="color: #FF5733; font-weight: bold">For more features and to experience the full capabilities of our product, please visit [https://vivago.ai/](... | [] |
MifranM/8b-indramayu-Language-AI-T4 | MifranM | 2025-11-27T22:44:39Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct",
"base_model:finetune:GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-11-12T04:22:07Z | # Model Card for 8b-indramayu-Language-AI-T4
This model is a fine-tuned version of [GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct](https://huggingface.co/GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers... | [] |
mradermacher/Akkadian-Pretrain-Qwen3-4B-Merged-16B-GGUF | mradermacher | 2026-03-21T02:56:01Z | 354 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"en",
"base_model:ljcamargo/Akkadian-Pretrain-Qwen3-4B-Merged-16B",
"base_model:quantized:ljcamargo/Akkadian-Pretrain-Qwen3-4B-Merged-16B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-21T01:58:13Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
introvoyz041/Goedel-Prover-V2-32B | introvoyz041 | 2026-04-13T22:44:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2508.03613",
"base_model:Qwen/Qwen3-32B",
"base_model:finetune:Qwen/Qwen3-32B",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-13T22:44:29Z | <div align="center">
<h1> <a href="http://blog.goedel-prover.com"> <strong>Goedel-Prover-V2: The Strongest Open-Source Theorem Prover to Date</strong></a></h1>
</div>
<div align="center">
[](http://blog.go... | [] |
a3ilab-llm-uncertainty/xlam_8B_bfcl_lr1_5e5_ep15_new | a3ilab-llm-uncertainty | 2026-01-09T10:25:19Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Salesforce/Llama-xLAM-2-8b-fc-r",
"llama-factory",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:Salesforce/Llama-xLAM-2-8b-fc-r",
"license:other",
"region:us"
] | text-generation | 2026-01-09T10:22:43Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlam_8B_bfcl_lr1_5e5_ep15_new
This model is a fine-tuned version of [Salesforce/Llama-xLAM-2-8b-fc-r](https://huggingface.co/Sale... | [] |
choegayoung/aiedu0406-gguf | choegayoung | 2026-04-06T06:42:49Z | 0 | 0 | null | [
"gguf",
"llama",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-06T06:42:08Z | # aiedu0406-gguf : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `llama-cli -hf choegayoung/aiedu0406-gguf --jinja`
- For multimodal models: `llama-mtmd-cli -hf choegayoung/aiedu0406-gguf --jinja`
## Avail... | [
{
"start": 86,
"end": 93,
"text": "Unsloth",
"label": "training method",
"score": 0.7620354890823364
},
{
"start": 124,
"end": 131,
"text": "unsloth",
"label": "training method",
"score": 0.829958975315094
},
{
"start": 509,
"end": 516,
"text": "unsloth",
... |
goyalayus/wordle-hardening-20260328-164755-boundarystop1-sft_main | goyalayus | 2026-03-28T16:51:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"sft",
"trl",
"endpoints_compatible",
"region:us"
] | null | 2026-03-28T16:49:37Z | # Model Card for wordle-hardening-20260328-164755-boundarystop1-sft_main
This model is a fine-tuned version of [unsloth/qwen3-4b-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen3-4b-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers... | [] |
zhoumiaosen/groot_isaac_pick_up_cube | zhoumiaosen | 2025-11-09T02:48:38Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"groot",
"dataset:zhoumiaosen/isaac_pick_up_cube",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-08T21:09:40Z | # Model Card for groot
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.... | [] |
samil24/whisper-small-serbian-v3 | samil24 | 2025-09-03T05:22:26Z | 18 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-08-27T10:07:54Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-serbian-v3
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small... | [] |
Rei-xx1/Rei_iD | Rei-xx1 | 2026-04-29T04:56:27Z | 0 | 0 | null | [
"safetensors",
"gguf",
"phi3",
"empathetic",
"counseling",
"mental-health",
"conversational",
"text-generation",
"id",
"en",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix"
] | text-generation | 2026-04-28T10:12:57Z | # 🤖 Rei_iD: Your Digital Empathy Companion
**Rei_iD** is a specialized language model designed with one primary mission: **to be a good listener.**
Built on a high-level reasoning architecture, Rei_iD has been optimized to understand emotional nuance, provide moral support, and be a conversation partner that does not... | [] |
dinerburger/Qwen3.5-35B-A3B-GGUF | dinerburger | 2026-02-27T18:27:08Z | 1,033 | 0 | null | [
"gguf",
"base_model:Qwen/Qwen3.5-35B-A3B",
"base_model:quantized:Qwen/Qwen3.5-35B-A3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-02-27T17:45:40Z | This is an IQ4_NL quantization of [Qwen3.5-35B-A3B](https://huggingface.co/Qwen/Qwen3.5-35B-A3B), using the [unsloth imatrix data](https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF/resolve/main/imatrix_unsloth.gguf_file), but with the following special rules applied:
- The embedding and output layers were kept in BF... | [] |
contemmcm/fd055b8a7b2e33bf3b010f4737f1eb59 | contemmcm | 2025-10-31T12:50:38Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-10-31T12:35:30Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fd055b8a7b2e33bf3b010f4737f1eb59
This model is a fine-tuned version of [distilbert/distilgpt2](https://huggingface.co/distilbert/... | [
{
"start": 504,
"end": 512,
"text": "F1 Macro",
"label": "training method",
"score": 0.7037195563316345
}
] |
parallelm/gpt2_small_DE_bpe_12310_parallel10_42 | parallelm | 2026-01-30T09:01:38Z | 0 | 0 | null | [
"safetensors",
"gpt2",
"generated_from_trainer",
"region:us"
] | null | 2026-01-30T09:01:32Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_small_DE_bpe_12310_parallel10_42
This model was trained from scratch on an unknown dataset.
It achieves the following result... | [] |
mradermacher/Nemo-v7-tekken-base-GGUF | mradermacher | 2025-10-07T15:24:08Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"unsloth",
"en",
"base_model:NewEden-Forge/Nemo-v7-tekken-base",
"base_model:quantized:NewEden-Forge/Nemo-v7-tekken-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-01T10:32:27Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
BootesVoid/cme87g29c00vgrts82wt5ebjj_cme87n82c00vsrts80dlqdeor | BootesVoid | 2025-08-12T07:59:16Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-08-12T07:59:15Z | # Cme87G29C00Vgrts82Wt5Ebjj_Cme87N82C00Vsrts80Dlqdeor
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https:... | [] |
HectorHe/Qwen1.5-MOE-sft-coommonsense15k-aux-free-1e-5 | HectorHe | 2025-09-25T00:18:51Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_moe",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:fw407/Commonsense-15K",
"base_model:Qwen/Qwen1.5-MoE-A2.7B",
"base_model:finetune:Qwen/Qwen1.5-MoE-A2.7B",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-24T17:26:33Z | # Model Card for Qwen1.5-MOE-sft-coommonsense15k-aux-free-1e-5
This model is a fine-tuned version of [Qwen/Qwen1.5-MoE-A2.7B](https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B) on the [fw407/Commonsense-15K](https://huggingface.co/datasets/fw407/Commonsense-15K) dataset.
It has been trained using [TRL](https://github.com/... | [] |
Udayan012/tiny-cnn-classifier | Udayan012 | 2025-10-03T18:17:03Z | 0 | 0 | pytorch | [
"pytorch",
"image-classification",
"cnn",
"cifar-10",
"license:apache-2.0",
"region:us"
] | image-classification | 2025-10-03T12:08:00Z | # Tiny CNN Classifier for CIFAR-10
This is a custom **Convolutional Neural Network (CNN)** trained on the **CIFAR-10 dataset**.
It classifies images into 10 categories:
`airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck`
---
## 📖 Model Overview
- **Type**: Convolutional Neural Netw... | [] |
yujiangw/AutoGEO_mini_Qwen1.7B | yujiangw | 2025-12-13T01:18:38Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-rewriting",
"web",
"generative-engine-optimization",
"geo",
"reinforcement-learning",
"grpo",
"conversational",
"en",
"arxiv:2510.11438",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"license:mit",... | text-generation | 2025-09-28T02:59:43Z | # AutoGEO_mini_Qwen1.7B
A lightweight **web-document rewriting** model fine-tuned with **GRPO** (reinforcement learning) from **Qwen3-1.7B**, developed as part of the AutoGEO framework introduced in:
**WHAT GENERATIVE SEARCH ENGINES LIKE AND HOW TO OPTIMIZE WEB CONTENT COOPERATIVELY**
Paper (arXiv): https://arxiv.o... | [] |
arithmetic-circuit-overloading/Llama-3.3-70B-Instruct-3d-1M-100K-0.1-reverse-padzero-plus-mul-sub-99-128D-2L-2H-512I | arithmetic-circuit-overloading | 2026-02-26T20:51:38Z | 521 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:finetune:meta-llama/Llama-3.3-70B-Instruct",
"license:llama3.3",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-26T20:24:26Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.3-70B-Instruct-3d-1M-100K-0.1-reverse-padzero-plus-mul-sub-99-128D-2L-2H-512I
This model is a fine-tuned version of [meta... | [] |
ellisdoro/apollo_sv-all-MiniLM-L6-v2_cross_attention_gcn_h512_o64_cosine_e512_early-on2vec-koji-early | ellisdoro | 2025-09-19T09:09:34Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-cross_attention",
"gnn-gcn",
"medium-ontology",
"license:apache-2.0",
"text-embedd... | sentence-similarity | 2025-09-19T09:09:29Z | # apollo_sv_all-MiniLM-L6-v2_cross_attention_gcn_h512_o64_cosine_e512_early
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2... | [
{
"start": 505,
"end": 520,
"text": "cross_attention",
"label": "training method",
"score": 0.7471964955329895
}
] |
swiss-ai/Apertus-8B-Instruct-2509 | swiss-ai | 2026-04-21T09:39:52Z | 202,405 | 447 | transformers | [
"transformers",
"safetensors",
"apertus",
"text-generation",
"multilingual",
"compliant",
"swiss-ai",
"conversational",
"arxiv:2509.14233",
"base_model:swiss-ai/Apertus-8B-2509",
"base_model:finetune:swiss-ai/Apertus-8B-2509",
"license:apache-2.0",
"endpoints_compatible",
"deploy:azure",
... | text-generation | 2025-08-13T09:30:23Z | # Apertus

## Table of Contents
1. [Model Summary](#model-summary)
2. [How to use](#how-to-use)
3. [Evaluation](#evaluation)
4. [Training](#training)
5. [Limitations](#limitations)
6. [Legal Aspec... | [] |
seeingterra/Soulblighter-24B-v1-Q5_K_M-GGUF | seeingterra | 2026-01-23T14:58:10Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",
"fiction story",
"science fiction",
"romance",
"all genres",
"story",
"writing",
"vivid prosing",
"vivid writin... | null | 2026-01-23T14:56:55Z | # seeingterra/Soulblighter-24B-v1-Q5_K_M-GGUF
This model was converted to GGUF format from [`OccultAI/Soulblighter-24B-v1`](https://huggingface.co/OccultAI/Soulblighter-24B-v1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card]... | [] |
ThatDustyGuy/FLUX.2-DEV-DLAY-LyCORIS | ThatDustyGuy | 2025-12-23T01:22:58Z | 8 | 0 | diffusers | [
"diffusers",
"safetensors",
"flux2",
"flux2-diffusers",
"text-to-image",
"image-to-image",
"lora",
"template:sd-lora",
"lycoris",
"Flux.2-dev",
"base_model:black-forest-labs/FLUX.2-dev",
"base_model:adapter:black-forest-labs/FLUX.2-dev",
"license:other",
"region:us"
] | text-to-image | 2025-12-12T17:11:20Z | # FLUX.2-DEV-DLAY-LyCORIS
This is a LyCORIS adapter derived from [black-forest-labs/FLUX.2-dev](https://huggingface.co/black-forest-labs/FLUX.2-dev).
No validation prompt was used during training.
None
## Validation settings
- CFG: `4.0`
- CFG Rescale: `0.0`
- Steps: `28`
- Sampler: `FlowMatchEulerDiscreteSchedul... | [] |
dobak/DeepSeek-V3 | dobak | 2026-02-18T12:51:49Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"deepseek_v3",
"text-generation",
"conversational",
"custom_code",
"arxiv:2412.19437",
"text-generation-inference",
"endpoints_compatible",
"fp8",
"region:us"
] | text-generation | 2026-02-18T12:51:48Z | <!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-... | [] |
bhargav1000/Finetuned-Phi3.5-Custom-Game | bhargav1000 | 2025-08-25T12:36:01Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"region:us"
] | null | 2025-08-25T11:27:01Z | # Model Card for Custom-Adaptive-GameAI Fighting Coach
<!-- Provide a quick summary of what the model is/does. -->
A fine-tuned Phi-3.5-mini-instruct model specialized as an **in-game sword-duel fighting coach** that provides real-time tactical advice during AI vs AI combat scenarios. The model analyzes game state in... | [] |
AmirMohseni/router-mmBERT-small-text-only-v3 | AmirMohseni | 2025-10-24T13:01:01Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:jhu-clsp/mmBERT-small",
"base_model:finetune:jhu-clsp/mmBERT-small",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-10-24T12:59:22Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# router-mmBERT-small-text-only-v3
This model is a fine-tuned version of [jhu-clsp/mmBERT-small](https://huggingface.co/jhu-clsp/mm... | [] |
Hironabe333/transformers-ghost-shard-safetensors-poc | Hironabe333 | 2026-04-30T05:10:35Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-04-30T05:00:26Z | # Ghost Shard PoC — transformers multi-shard safetensors index validation
**Finding**: Silent ghost tensor injection via multi-shard safetensors last-write-wins merge in `from_pretrained()`
**Target**: `huggingface/transformers` (OSV)
**Versions tested**: transformers 5.7.0, safetensors 0.5.3, torch 2.11.0+cpu
---... | [] |
ma2shita/gripper_awsreinvent25_bb3-act-a-steps40K | ma2shita | 2025-10-31T16:50:32Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:ma2shita/gripper_awsreinvent25_bb3",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-10-31T16:50:18Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
phospho-app/ACT_BBOX-pick_and_place-gvmntqgnob | phospho-app | 2025-10-31T10:06:17Z | 0 | 0 | phosphobot | [
"phosphobot",
"act",
"robotics",
"dataset:LegrandFrederic/pick_and_place",
"region:us"
] | robotics | 2025-10-31T10:06:05Z | ---
datasets: LegrandFrederic/pick_and_place
library_name: phosphobot
pipeline_tag: robotics
model_name: act
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act model - 🧪 phosphobot training pipeline
- **Dataset**: [LegrandFrederic/pick_and_place](https://huggingface.co/datasets/LegrandFrederic/pick_and_p... | [] |
TheCluster/Qwen3.6-35B-A3B-Heretic-MLX-mixed-6.4bit | TheCluster | 2026-04-25T05:37:08Z | 2,496 | 3 | mlx | [
"mlx",
"safetensors",
"qwen3_5_moe",
"heretic",
"uncensored",
"unrestricted",
"decensored",
"abliterated",
"6bit",
"mixed-precision",
"image-text-to-text",
"conversational",
"en",
"zh",
"ru",
"es",
"fr",
"it",
"ja",
"ko",
"af",
"de",
"ar",
"tr",
"is",
"pl",
"sw",
... | image-text-to-text | 2026-04-17T02:01:28Z | <div align="center"><img width="400px" src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3.6/logo.png"></div>
<div style="text-align:center; margin-bottom:12pt">If you like my work, you can <a href="https://donatr.ee/thecluster/">support me</a><br/></div>
# Qwen3.6-35B-A3B Heretic
**Quality**: quantized (***mi... | [] |
kinxtar000/Qwen3-TTS-12Hz-1.7B-VoiceDesign-Finetune-v1.1 | kinxtar000 | 2026-03-13T17:32:07Z | 38 | 0 | qwen-tts | [
"qwen-tts",
"safetensors",
"qwen3_tts",
"audio",
"tts",
"qwen",
"multilingual",
"text-to-speech",
"arxiv:2601.15621",
"license:apache-2.0",
"region:us"
] | text-to-speech | 2026-03-13T17:27:33Z | # Qwen3-TTS
<br>
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-TTS-Repo/qwen3_tts_logo.png" width="400"/>
<p>
<p align="center">
  🤗 <a href="https://huggingface.co/collections/Qwen/qwen3-tts">Hugging Face</a>   |   🤖 <a href="https://modelscope.cn/c... | [] |
Qwen/Qwen3-VL-8B-Thinking | Qwen | 2025-11-26T13:18:14Z | 128,710 | 195 | transformers | [
"transformers",
"safetensors",
"qwen3_vl",
"image-text-to-text",
"conversational",
"arxiv:2505.09388",
"arxiv:2502.13923",
"arxiv:2409.12191",
"arxiv:2308.12966",
"license:apache-2.0",
"endpoints_compatible",
"deploy:azure",
"region:us"
] | image-text-to-text | 2025-10-11T07:24:34Z | <a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
# Qwen3-VL-8B-Thinking
Meet Qwen3-VL — the most powerful vision-language model in ... | [] |
brainer/whisper-small-aihub-ko-streaming | brainer | 2025-11-21T08:26:07Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-11-21T08:25:42Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-aihub-ko-streaming
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisp... | [] |
gss1147/Llama-3.2-OctoThinker-iNano-1B | gss1147 | 2026-03-24T21:20:09Z | 228 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-3.2",
"merge",
"slerp",
"reasoning",
"instruct",
"chat",
"coding",
"1b",
"gss1147",
"en",
"base_model:NeuraLakeAi/iSA-02-Nano-Llama-3.2-1B",
"base_model:merge:NeuraLakeAi/iSA-02-Nano-Llama-3.2-1B",
"base_model:OctoThin... | text-generation | 2026-03-24T16:51:34Z | # Llama-3.2-OctoThinker-iNano-1B
**Llama-3.2-OctoThinker-iNano-1B** is a compact 1B-parameter merged language model built from three Llama 3.2-based components:
- `NeuraLakeAi/iSA-02-Nano-Llama-3.2-1B`
- `OctoThinker/OctoThinker-1B-Hybrid-Base`
- `meta-llama/Llama-3.2-1B-Instruct`
This model was merged using the **S... | [
{
"start": 319,
"end": 324,
"text": "SLERP",
"label": "training method",
"score": 0.7775816321372986
},
{
"start": 1139,
"end": 1144,
"text": "SLERP",
"label": "training method",
"score": 0.7686801552772522
},
{
"start": 1151,
"end": 1156,
"text": "SLERP",... |
gsjang/zh-llama3-8b-chinese-chat-x-meta-llama-3-8b-instruct-task_arithmetic-50_50 | gsjang | 2025-08-28T14:46:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:merge:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:shenzhi-wang/Llama3-8B-Chinese-Chat",
"base_model:merge:sh... | text-generation | 2025-08-28T14:43:17Z | # zh-llama3-8b-chinese-chat-x-meta-llama-3-8b-instruct-task_arithmetic-50_50
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method u... | [] |
ilmj8426/capcut-watermark-benchmark | ilmj8426 | 2026-02-28T07:56:49Z | 0 | 0 | null | [
"capcut",
"ugc-video",
"benchmark",
"deep-learning",
"video-to-video",
"en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | video-to-video | 2026-02-28T07:52:01Z | # CapCut Video Watermark Benchmark Dataset
## Dataset Description
This repository contains a specialized **benchmark dataset** for evaluating the performance of video inpainting and watermark removal models, specifically focusing on content generated by **CapCut (剪映)**, a popular mobile video editing platform.
CapCu... | [] |
dobrien/ViT-B-32-GTSRB-dummy-EuroSAT-1e-0-arithmetic | dobrien | 2026-04-05T02:49:46Z | 0 | 0 | null | [
"pytorch",
"region:us"
] | null | 2026-03-09T02:43:36Z | ## Dataset: GTSRB
## Dataset Location: tanganke/gtsrb
## Dummy Dataset: EuroSAT
## Dummy Dataset Location: tanganke/eurosat
## Loss Term: 1e-0
## Merge Method: arithmetic
## Test-Set Accuracy: 0.9893551468849182
## Test-Set Loss: 0.05532679229247478
## Tra... | [] |
grayarea/Skyfall-31B-v4.1-Heretic-v1.2 | grayarea | 2026-03-08T22:33:46Z | 17 | 1 | null | [
"safetensors",
"mistral",
"heretic",
"uncensored",
"decensored",
"abliterated",
"mpoa",
"base_model:TheDrummer/Skyfall-31B-v4.1",
"base_model:finetune:TheDrummer/Skyfall-31B-v4.1",
"region:us"
] | null | 2026-03-08T22:30:30Z | This is a decensored version of Skyfall-31B-v4.1, made using Heretic v1.2.0 focusing on zero refusals with low KL divergence.
## KL Divergence
| Metric | This Model | Original Model |
| ------ | ---------- | -------------- |
| **KL divergence** | 0.0053 | 0 *(by definition)* |
| **Refusals** | 0/108 | 73/108 |
## Abl... | [] |
zyprdta/CM_NLP_Ex1 | zyprdta | 2025-11-18T05:51:04Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-18T05:50:48Z | # Model Card for CM_NLP_Ex1
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, bu... | [] |
alvinljr/Nano-Banana-Pro-Unlimited-AI-Video-Generation | alvinljr | 2026-03-29T01:16:59Z | 0 | 0 | null | [
"region:us"
] | null | 2026-03-29T01:16:59Z | # 🍌 Nano Banana Pro Video Gen Unlimited
### **Unlimited AI Video Generation | No API Keys | 100% Free**
The most accessible AI video engine for creators. Generate viral content for YouTube Shorts, TikTok, and Reels without the monthly subscriptions or API headaches.
[
<div align=center>
<h1><a color="red" href="https://arxiv.org/pdf/2505.17505">L-MTP: Leap Multi-Token Prediction Beyond Adjacent Context for Large Language Models</a></h1>

.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If y... | [] |
tanaylab/sns-paper-borzoi-finetuned-rf32k | tanaylab | 2026-03-11T10:33:15Z | 4 | 0 | null | [
"safetensors",
"biology",
"genomics",
"epigenomics",
"borzoi",
"polycomb",
"h3k27me3",
"h3k4me3",
"dataset:custom",
"license:apache-2.0",
"region:us"
] | null | 2026-03-11T10:32:25Z | # Borzoi Fine-tuned RF 32k — Foundation Model
Pre-trained Borzoi model (`johahi/borzoi-replicate-0`) fine-tuned on mouse ESC-derived CUT&Tag H3K27me3 and H3K4me3 tracks via two-stage training (linear probe then full fine-tuning).
- **Receptive field**: 32k
- **Base model**: `johahi/borzoi-replicate-0`
- **Resolution*... | [] |
Diocletianus/Diocletianus-lora-repo0225 | Diocletianus | 2026-02-25T13:17:16Z | 8 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v5",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-25T13:16:51Z | qwen3-4b-structured-output-lora0225
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve... | [
{
"start": 137,
"end": 142,
"text": "QLoRA",
"label": "training method",
"score": 0.8150015473365784
},
{
"start": 191,
"end": 195,
"text": "LoRA",
"label": "training method",
"score": 0.7054830193519592
},
{
"start": 578,
"end": 583,
"text": "QLoRA",
... |
s0close/medgemma-dsa-lora | s0close | 2026-02-23T08:14:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"medical-imaging",
"radiology",
"dsa",
"medgemma",
"lora",
"image-text-to-text",
"conversational",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-02-13T02:01:18Z | # Model Card: MedGemma DSA LoRA Adapter (`ds-lora`)
## Model Details
### Model Description
This repository contains a LoRA adapter fine-tuned for Digital Subtraction Angiography (DSA) vessel patency triage.
The adapter is intended to improve sensitivity for **blocked/occluded vessel patterns** compared with naive ba... | [] |
flexitok/unigram_jpn_Jpan_64000 | flexitok | 2026-02-23T03:23:14Z | 0 | 0 | null | [
"tokenizer",
"unigram",
"flexitok",
"fineweb2",
"jpn",
"license:mit",
"region:us"
] | null | 2026-02-23T03:19:07Z | # UnigramLM Tokenizer: jpn_Jpan (64K)
A **UnigramLM** tokenizer trained on **jpn_Jpan** data from Fineweb-2-HQ.
## Training Details
| Parameter | Value |
|-----------|-------|
| Algorithm | UnigramLM |
| Language | `jpn_Jpan` |
| Target Vocab Size | 64,000 |
| Final Vocab Size | 0 |
| Pre-tokenizer | ByteLevel |
| N... | [] |
mradermacher/DeepSeek-R1-RomzPorto-COT-GGUF | mradermacher | 2025-11-18T01:28:13Z | 315 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"en",
"base_model:romiz21/DeepSeek-R1-RomzPorto-COT",
"base_model:quantized:romiz21/DeepSeek-R1-RomzPorto-COT",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-11-18T01:16:33Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
rbelanec/train_svamp_1757340223 | rbelanec | 2025-09-10T15:59:37Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"p-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-10T15:52:49Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_svamp_1757340223
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-ll... | [] |
mradermacher/Sardan-3B-Ministral-GGUF | mradermacher | 2026-02-13T22:00:42Z | 15 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:blascotobasco/Sardan-3B-Ministral",
"base_model:quantized:blascotobasco/Sardan-3B-Ministral",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-02-13T21:36:07Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
secemp9/DeepSeek-V3.2-Speciale | secemp9 | 2025-12-31T20:31:40Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"deepseek_v32",
"text-generation",
"base_model:deepseek-ai/DeepSeek-V3.2-Exp-Base",
"base_model:finetune:deepseek-ai/DeepSeek-V3.2-Exp-Base",
"license:mit",
"endpoints_compatible",
"fp8",
"region:us"
] | text-generation | 2025-12-31T20:31:36Z | # DeepSeek-V3.2: Efficient Reasoning & Agentic AI
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-... | [] |
alexcovo/qwen35-9b-mlx-turboquant-tq3 | alexcovo | 2026-03-30T03:03:55Z | 15 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3_5",
"turboquant",
"kv-cache",
"qwen3",
"apple-silicon",
"text-generation",
"conversational",
"base_model:mlx-community/Qwen3.5-9B-MLX-4bit",
"base_model:quantized:mlx-community/Qwen3.5-9B-MLX-4bit",
"license:apache-2.0",
"4-bit",
"region:us"
] | text-generation | 2026-03-30T02:26:17Z | # Qwen3.5-9B MLX TurboQuant TQ3
This repo packages the current best TurboQuant runtime recipe we have measured for `mlx-community/Qwen3.5-9B-MLX-4bit`: TurboQuant 3-bit KV-cache compression (`TQ3`) on Apple Silicon.
Short version:
- this is not a new checkpoint
- it is a reproducible inference overlay on top of the b... | [] |
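The TurboQuant code itself is not part of this card, but the general shape of low-bit KV-cache compression can be sketched generically: quantize each cache slice to 3-bit integers with a per-row scale and dequantize on read. A toy numpy illustration, explicitly not the TurboQuant algorithm:

```python
# Toy illustration of 3-bit KV-cache quantization with per-row absmax
# scaling. Generic sketch only -- not the TurboQuant algorithm.
import numpy as np

def quantize_kv_3bit(x: np.ndarray):
    # x: (rows, head_dim) cache slice; 3 bits -> 8 integer levels in [-4, 3]
    scale = np.abs(x).max(axis=-1, keepdims=True) / 3.0 + 1e-8
    q = np.clip(np.round(x / scale), -4, 3).astype(np.int8)
    return q, scale

def dequantize_kv_3bit(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scale

kv = np.random.randn(8, 128).astype(np.float32)
q, s = quantize_kv_3bit(kv)
err = np.abs(kv - dequantize_kv_3bit(q, s)).mean()
print(f"mean abs reconstruction error: {err:.4f}")
```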
mradermacher/RegTech-4B-Instruct-GGUF | mradermacher | 2026-02-14T01:00:11Z | 33 | 1 | transformers | [
"transformers",
"gguf",
"lora",
"fine-tuned",
"banking",
"regtech",
"compliance",
"rag",
"tool-calling",
"italian",
"qwen3",
"it",
"en",
"base_model:Sophia-AI/RegTech-4B-Instruct",
"base_model:adapter:Sophia-AI/RegTech-4B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"re... | null | 2026-02-14T00:36:24Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
JustArchon/klue-roberta-base-klue-sts-mrc | JustArchon | 2025-08-12T01:16:56Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-08-12T01:16:27Z | # {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when y... | [] |
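The usage section above is cut off; the standard `sentence-transformers` pattern for a model like this looks as follows (the example sentences are illustrative):

```python
# Standard sentence-transformers usage for an embedding model like this one.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("JustArchon/klue-roberta-base-klue-sts-mrc")
sentences = ["이것은 예시 문장입니다.", "각 문장은 벡터로 변환됩니다."]
embeddings = model.encode(sentences)
print(embeddings.shape)  # (2, 768)
```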
Muapi/jj-s-interior-style-cyberpunk | Muapi | 2025-08-18T17:23:33Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T17:23:20Z | # JJ's Interior style - Cyberpunk

**Base model**: Flux.1 D
**Trained words**: Cyberpunk
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers ... | [] |
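The snippet above is truncated mid-request; a plausible continuation is sketched below, but the header name and payload fields are assumptions rather than the documented MUAPI schema:

```python
# Hedged sketch completing the truncated request above. The header name
# and payload fields are assumptions -- check the MUAPI docs for the
# real request schema.
import os
import requests

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"x-api-key": os.environ["MUAPI_API_KEY"]}  # assumed header name
payload = {
    "prompt": "Cyberpunk, a neon-lit loft interior",  # trigger word: Cyberpunk
    "lora": "jj-s-interior-style-cyberpunk",          # assumed field name
}
resp = requests.post(url, json=payload, headers=headers, timeout=60)
print(resp.status_code, resp.json())
```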
rinna/japanese-gpt2-small | rinna | 2025-03-23T10:45:51Z | 10,711 | 26 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"gpt2",
"text-generation",
"lm",
"nlp",
"ja",
"dataset:cc100",
"dataset:wikipedia",
"arxiv:2404.01657",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"deploy:azure",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | # japanese-gpt2-small

This repository provides a small-sized Japanese GPT-2 model. The model was trained using code from Github repository [rinnakk/japanese-pretrained-models](https://github.com/rinnakk/japanese-pretrained-models) by [rinna Co., Ltd.](https://corp.rinna.co.jp/)
# How to us... | [] |
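The "How to use" section is cut off; rinna's Japanese GPT-2 checkpoints are conventionally loaded with a sentencepiece-based `T5Tokenizer`, sketched here with illustrative generation settings:

```python
# Sketch: loading a rinna Japanese GPT-2 model. rinna's GPT-2 checkpoints
# conventionally pair a sentencepiece-based T5Tokenizer with the LM head.
import torch
from transformers import T5Tokenizer, AutoModelForCausalLM

tokenizer = T5Tokenizer.from_pretrained("rinna/japanese-gpt2-small")
tokenizer.do_lower_case = True  # rinna's cards conventionally set this
model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt2-small")

inputs = tokenizer("こんにちは、", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=30, do_sample=True)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```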
mradermacher/gemma-3-it-vl-40B-Gemini-Heretic-Uncensored-Thinking-GGUF | mradermacher | 2026-04-02T15:40:21Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"fine tune",
"heretic",
"uncensored",
"abliterated",
"multi-stage tuned.",
"all use cases",
"coder",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",... | null | 2026-04-02T12:29:39Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
Pacific-Prime/diffusion-vae | Pacific-Prime | 2026-01-12T23:39:31Z | 3 | 2 | pytorch | [
"pytorch",
"safetensors",
"inl-vae",
"vae",
"image-generation",
"diffusion",
"complexity-diffusion",
"image-to-image",
"license:cc-by-nc-4.0",
"region:us"
] | image-to-image | 2026-01-09T13:14:27Z | # Complexity-Diffusion VAE
Variational Autoencoder for the Complexity-Diffusion image generation pipeline.
## Architecture
**89M parameters** | 256x256 images | 4-channel latent space
### Encoder
$$z = \mathcal{E}(x) \in \mathbb{R}^{32 \times 32 \times 4}$$
Compresses 256x256x3 images to 32x32x4 latents (8x spatial co... | [] |
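The compression implied by these shapes is easy to verify with plain arithmetic; a quick check (no model weights required):

```python
# Quick check of the compression implied by the encoder shapes above.
in_elems = 256 * 256 * 3       # input image elements (196,608)
lat_elems = 32 * 32 * 4        # latent elements (4,096)
print(256 // 32)               # 8x spatial compression per axis
print(in_elems / lat_elems)    # 48x element-count compression overall
```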
WindyWord/translate-sv-mt | WindyWord | 2026-04-20T13:33:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"swedish",
"maltese",
"sv",
"mt",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2026-04-19T05:40:40Z | # WindyWord.ai Translation — Swedish → Maltese
**Translates Swedish → Maltese.**
**Quality Rating: ⭐⭐⭐⭐⭐ (5.0★ Premium)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs.
## Quality & Pricing Tier
- **5-star rating:** 5.0★ ⭐⭐⭐⭐⭐
- **Tier:** Premium
- **Compos... | [] |
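Marian-based translation repos on the Hub follow a common `transformers` pattern; a minimal sketch, assuming this repo ships standard Marian weights:

```python
# Sketch: standard MarianMT usage, assuming this repo uses the usual
# transformers Marian layout.
from transformers import MarianMTModel, MarianTokenizer

repo = "WindyWord/translate-sv-mt"
tokenizer = MarianTokenizer.from_pretrained(repo)
model = MarianMTModel.from_pretrained(repo)

batch = tokenizer(["Hej, hur mår du?"], return_tensors="pt", padding=True)
out = model.generate(**batch, max_new_tokens=64)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```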
HrithikHadawale/nova-brand-voice-adapters | HrithikHadawale | 2026-03-20T08:57:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"peft",
"lora",
"qlora",
"customer-support",
"brand-voice",
"text-generation",
"conversational",
"en",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:mit",
"endpoints_compatible",
"region:u... | text-generation | 2026-03-20T07:16:59Z | # Model Card for nova-brand-voice-adapters
## Model Details
### Model Description
`nova-brand-voice-adapters` is a QLoRA-finetuned LoRA adapter built on top of `TinyLlama/TinyLlama-1.1B-Chat-v1.0` for brand-voice customer support response generation.
The model is designed to generate short, polite, brand-aligned re... | [] |
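QLoRA, as named in the card, trains LoRA adapters on top of a 4-bit-quantized base model; a minimal `transformers` + `peft` sketch of that setup, with rank and target modules chosen for illustration:

```python
# Sketch: QLoRA setup -- 4-bit quantized base model with trainable LoRA
# adapters. Rank and target modules are illustrative, not this card's recipe.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0", quantization_config=bnb
)
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(base, lora)
model.print_trainable_parameters()
```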
bartowski/TheDrummer_Precog-24B-v1-GGUF | bartowski | 2025-11-13T11:07:58Z | 882 | 12 | null | [
"gguf",
"text-generation",
"base_model:TheDrummer/Precog-24B-v1",
"base_model:quantized:TheDrummer/Precog-24B-v1",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-11-13T09:40:20Z | ## Llamacpp imatrix Quantizations of Precog-24B-v1 by TheDrummer
Using <a href="https://github.com/ggml-org/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b6907">b6907</a> for quantization.
Original model: https://huggingface.co/TheDrummer/Precog-24B-v1
All quants made ... | [] |
morganlinton/qwen3-4b-thinking-gsm8k-sft | morganlinton | 2026-04-30T23:32:56Z | 12 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3",
"lora",
"math",
"reasoning",
"gsm8k",
"text-generation",
"conversational",
"en",
"dataset:open-r1/OpenR1-Math-220k",
"dataset:openai/gsm8k",
"base_model:mlx-community/Qwen3-4B-Thinking-2507-4bit",
"base_model:adapter:mlx-community/Qwen3-4B-Thinking-2507-4bit"... | text-generation | 2026-04-30T23:32:06Z | # Qwen3-4B-Thinking-GSM8K-SFT
A LoRA fine-tune of `Qwen3-4B-Thinking-2507` for grade-school math word
problems. Trained on a 2,000-example filtered subset of OpenR1-Math-220k
on a single M1 Max 32GB Mac Studio using `mlx-lm`.
The fine-tune teaches the model to produce reliable `#### <number>` final
answer formatting ... | [
{
"start": 1311,
"end": 1319,
"text": "LoRA SFT",
"label": "training method",
"score": 0.8437612056732178
}
] |
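mlx-lm can attach a LoRA adapter at load time; a minimal sketch, assuming the adapter files have been downloaded locally in mlx-lm's standard adapter layout:

```python
# Sketch: inference with an mlx-lm LoRA adapter on Apple Silicon.
# adapter_path is assumed to point at a local download of this repo's
# adapter files in mlx-lm's expected directory layout.
from mlx_lm import load, generate

model, tokenizer = load(
    "mlx-community/Qwen3-4B-Thinking-2507-4bit",
    adapter_path="./qwen3-4b-thinking-gsm8k-sft",
)
prompt = "Natalia sold clips to 48 friends in April, and half as many in May. How many clips did she sell altogether?"
print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```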
fpadovani/wiki_np_51 | fpadovani | 2025-11-28T19:51:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-28T18:27:37Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wiki_np_51
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following res... | [] |
almortamoh/j | almortamoh | 2025-08-26T03:00:10Z | 0 | 0 | null | [
"region:us"
] | null | 2025-08-26T01:47:50Z | # My Unified Wallet (محفظتي الموحدة) - The Unified Yemeni E-Wallet App
## Overview
**My Unified Wallet** is an advanced mobile app built with Ionic/Angular that brings all Yemeni e-wallets together in a single interface, letting users manage all of their wallets from one place using a single unified phone number.
## 📱 **The app is now available as:**
- **An Android app... | [] |
wikilangs/ceb | wikilangs | 2026-03-04T08:50:57Z | 0 | 0 | wikilangs | [
"wikilangs",
"nlp",
"tokenizer",
"embeddings",
"n-gram",
"markov",
"wikipedia",
"feature-extraction",
"sentence-similarity",
"tokenization",
"n-grams",
"markov-chain",
"text-mining",
"fasttext",
"babelvec",
"vocabulous",
"vocabulary",
"monolingual",
"family-austronesian_philippin... | text-generation | 2025-12-28T22:39:28Z | # Cebuano — Wikilangs Models
Open-source tokenizers, n-gram & Markov language models, vocabulary stats, and word embeddings trained on **Cebuano** Wikipedia by [Wikilangs](https://wikilangs.org).
🌐 [Language Page](https://wikilangs.org/languages/ceb/) · 🎮 [Playground](https://wikilangs.org/playground/?lang=ceb) · ... | [] |
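The repo's Markov models are not shown in the card, but the underlying idea is simple to sketch: estimate next-word transitions from bigram counts and sample. A toy illustration whose miniature corpus is a stand-in, not Wikilangs' data:

```python
# Toy word-level Markov chain, illustrating the kind of n-gram/Markov
# model this repo distributes. The tiny corpus below is a stand-in.
import random
from collections import defaultdict

corpus = "ang balay ang balay dako ang iro gamay".split()
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

word, out = "ang", ["ang"]
for _ in range(5):
    if word not in bigrams:
        break
    word = random.choice(bigrams[word])  # samples proportionally to counts
    out.append(word)
print(" ".join(out))
```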
berijoyis/faster-whisper-large-v2 | berijoyis | 2026-03-17T21:38:00Z | 13 | 0 | ctranslate2 | [
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
... | automatic-speech-recognition | 2026-03-17T21:37:59Z | # Whisper large-v2 model for CTranslate2
This repository contains the conversion of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faste... | [] |
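A CTranslate2 Whisper conversion like this is normally driven through `faster-whisper`; a minimal sketch with a placeholder audio path:

```python
# Sketch: transcribing with faster-whisper using this CTranslate2 conversion.
# The audio file path is a placeholder.
from faster_whisper import WhisperModel

model = WhisperModel("berijoyis/faster-whisper-large-v2", device="cpu",
                     compute_type="int8")
segments, info = model.transcribe("audio.wav", beam_size=5)
print(f"detected language: {info.language} (p={info.language_probability:.2f})")
for seg in segments:
    print(f"[{seg.start:.2f}s -> {seg.end:.2f}s] {seg.text}")
```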
Delnith/Sugoi-14B-Ultra-HF-gptqmodel-8bit | Delnith | 2025-08-24T19:14:54Z | 1 | 1 | null | [
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"ja",
"dataset:lmg-anon/VNTL-v3.1-1k",
"base_model:sugoitoolkit/Sugoi-14B-Ultra-HF",
"base_model:quantized:sugoitoolkit/Sugoi-14B-Ultra-HF",
"license:apache-2.0",
"8-bit",
"gptq",
"region:us"
] | text-generation | 2025-08-24T19:06:49Z | # Sugoi LLM 14B Ultra (HF version)
This is an 8-bit version of Sugoi 14B Ultra, quantized using GPTQModel and the VNTL-v3.1-1k dataset. This quant should work better than GGUF for certain backends like vLLM and aphrodite-engine, which excel at asynchronous prompting.
Unleashing the full potential of the previous sugo... | [] |
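GPTQ checkpoints in this layout load through `transformers` directly once a GPTQ backend is installed; a minimal sketch:

```python
# Sketch: loading a GPTQ-quantized checkpoint through transformers.
# Requires a GPTQ backend (e.g. gptqmodel) and accelerate installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Delnith/Sugoi-14B-Ultra-HF-gptqmodel-8bit"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("こんにちは。", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```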