Column types: `modelId` string (9–122 chars) | `author` string (2–36 chars) | `last_modified` timestamp[us, tz=UTC] (2021-05-20 01:31:09 to 2026-05-05 06:14:24) | `downloads` int64 (0 to 4.03M) | `likes` int64 (0 to 4.32k) | `library_name` string (189 classes) | `tags` list (1–237 items) | `pipeline_tag` string (53 classes) | `createdAt` timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2026-05-05 05:54:22) | `card` string (500–661k chars) | `entities` list (0–12 items)

| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card | entities |
|---|---|---|---|---|---|---|---|---|---|---|
X1AOX1A/WorldModel-Textworld-Qwen2.5-7B | X1AOX1A | 2025-12-26T02:06:37Z | 1,102 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"arxiv:2512.18832",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"... | text-generation | 2025-12-09T10:54:03Z | # *From Word to World*: Can Large Language Models be Implicit Text-based World Models?
[arXiv:2512.18832](https://arxiv.org/abs/2512.18832)
[](https://macaron.im/mindlab/research/how-wo... | [] |
buelfhood/progpedia19_codebert_ep30_bs16_lr1e-05_l512_s42_ppn_f_beta_score | buelfhood | 2025-11-17T06:17:51Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/codebert-base",
"base_model:finetune:microsoft/codebert-base",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-17T06:17:22Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# progpedia19_codebert_ep30_bs16_lr1e-05_l512_s42_ppn_f_beta_score
This model is a fine-tuned version of [microsoft/codebert-base](... | [] |
emmanuelaboah01/qiu-v8-qwen35-0.8b-stage4-mined-new-lora-fullseq | emmanuelaboah01 | 2026-03-31T13:20:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:emmanuelaboah01/qiu-v8-qwen35-0.8b-enriched-stage2-merged",
"base_model:finetune:emmanuelaboah01/qiu-v8-qwen35-0.8b-enriched-stage2-merged",
"endpoints_compatible",
"region:us"
] | null | 2026-03-31T13:20:30Z | # Model Card for qiu-v8-qwen35-0.8b-stage4-mined-new-lora-fullseq
This model is a fine-tuned version of [emmanuelaboah01/qiu-v8-qwen35-0.8b-enriched-stage2-merged](https://huggingface.co/emmanuelaboah01/qiu-v8-qwen35-0.8b-enriched-stage2-merged).
It has been trained using [TRL](https://github.com/huggingface/trl).
##... | [] |
Aquiles-ai/HunyuanVideo-1.5-480p-Turbo-fp8 | Aquiles-ai | 2026-01-06T21:06:28Z | 0 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-video",
"en",
"base_model:tencent/HunyuanVideo-1.5",
"base_model:finetune:tencent/HunyuanVideo-1.5",
"region:us"
] | text-to-video | 2025-12-22T01:31:01Z | # HunyuanVideo-1.5-480p-Turbo-fp8
This is the **ultimate optimized version** of <a href="https://huggingface.co/tencent/HunyuanVideo-1.5"><b>Tencent's HunyuanVideo-1.5</b></a>, combining both **Turbo LoRA** acceleration and **fp8 quantization**. This package offers the best balance of speed, memory efficiency, and qua... | [] |
VibrantVista/Hardy_Thomas | VibrantVista | 2026-02-03T11:59:00Z | 1 | 0 | null | [
"safetensors",
"llama",
"grpo",
"style-transfer",
"literature",
"reinforcement-learning",
"trl",
"en",
"arxiv:2512.05747",
"base_model:rshwndsz/Llama-3.1-8B-SFT",
"base_model:finetune:rshwndsz/Llama-3.1-8B-SFT",
"license:other",
"region:us"
] | reinforcement-learning | 2026-02-02T12:32:50Z | # Thomas Hardy - GRPO Style Transfer Model
This model was fine-tuned using **Group Relative Policy Optimization (GRPO)** to mimic the literary style of **Thomas Hardy**.
## Model Details
- **Base Model:** [rshwndsz/Llama-3.1-8B-SFT](https://huggingface.co/rshwndsz/Llama-3.1-8B-SFT)
- **Method:** GRPO (Reinforcement L... | [] |
RLVR-SvS/SvS-Qwen-Code-7B | RLVR-SvS | 2025-12-11T17:46:35Z | 2 | 2 | null | [
"safetensors",
"qwen2",
"reinforcement-learning",
"en",
"dataset:RLVR-SvS/Variational-DAPO",
"arxiv:2508.14029",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:mit",
"region:us"
] | reinforcement-learning | 2025-12-11T01:46:31Z | # Model Card for SvS-Code-7B (from Qwen2.5-7B-Instruct)
<p align="left">
<a href="https://mastervito.github.io/SvS.github.io/"><b>[🌐 Website]</b></a> •
<a href="https://huggingface.co/datasets/RLVR-SvS/Variational-DAPO"><b>[🤗 Dataset]</b></a> •
<a href="https://huggingface.co/RLVR-SvS/SvS-Qwen-32B"><b>[🤖 Mode... | [] |
blah7/photo-to-monet-cyclegan | blah7 | 2025-11-16T16:09:00Z | 0 | 0 | tf-keras | [
"tf-keras",
"region:us"
] | null | 2025-11-16T16:06:16Z | # Photo-to-Monet CycleGAN
Trained for 237 epochs on the Kaggle GAN dataset.
## For Continued Training
1. Recreate models/optimizers under strategy.scope() (from Cells 1-5).
2. Define custom layers (Cell 4).
3. Load: `ckpt.restore(tf.train.latest_checkpoint('./checkpoints'))`
4. Resume: `train(train_ds, epochs=additional_... | [] |
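A minimal sketch of the restore-and-resume flow above, assuming the real generators/discriminators and the `train()` loop are rebuilt exactly as in the notebook's Cells 1-5 (the tiny Sequential models and checkpoint attribute names below are placeholders):

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    # Placeholders: the notebook's Cells 1-5 build the real CycleGAN
    # generators/discriminators (with their custom layers) here.
    gen_g = tf.keras.Sequential([tf.keras.layers.Conv2D(3, 3, padding="same")])
    disc_y = tf.keras.Sequential([tf.keras.layers.Conv2D(1, 3, padding="same")])
    gen_opt = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
    disc_opt = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)

# Attribute names must match those used when the checkpoint was written.
ckpt = tf.train.Checkpoint(gen_g=gen_g, disc_y=disc_y,
                           gen_opt=gen_opt, disc_opt=disc_opt)
ckpt.restore(tf.train.latest_checkpoint("./checkpoints")).expect_partial()
# Then resume with the notebook's loop: train(train_ds, epochs=...)
```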
glogwa68/granite-4.0-h-1b-DISTILL-glm-4.7-think | glogwa68 | 2025-12-23T13:40:20Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"granitemoehybrid",
"text-generation",
"granite",
"fine-tuned",
"conversational",
"distillation",
"en",
"fr",
"dataset:TeichAI/glm-4.7-2000x",
"base_model:ibm-granite/granite-4.0-h-1b",
"base_model:finetune:ibm-granite/granite-4.0-h-1b",
"license:apache-2.0",... | text-generation | 2025-12-23T13:34:23Z | # granite-4.0-h-1b-DISTILL-glm-4.7-think
This model is a fine-tuned version of [ibm-granite/granite-4.0-h-1b](https://huggingface.co/ibm-granite/granite-4.0-h-1b) trained on conversational data.
## Model Details
- **Base Model:** ibm-granite/granite-4.0-h-1b
- **Fine-tuning Dataset:** TeichAI/glm-4.7-2000x
- **Train... | [] |
tangqh/PF-RPN | tangqh | 2026-03-20T08:54:29Z | 0 | 1 | mmdetection | [
"mmdetection",
"region-proposal",
"open-set-detection",
"zero-shot-detection",
"pytorch",
"cvpr2026",
"object-detection",
"dataset:coco",
"dataset:imagenet",
"dataset:cd-fsod",
"dataset:odinw",
"arxiv:2603.17554",
"license:apache-2.0",
"region:us"
] | object-detection | 2026-03-11T05:07:40Z | # PF-RPN: Prompt-Free Universal Region Proposal Network
This is the official implementation of **PF-RPN**, a state-of-the-art model for Cross-Domain Open-Set Region Proposal generation, accepted at **CVPR 2026**.
[**Paper**](https://huggingface.co/papers/2603.17554) | [**GitHub Repository**](https://github.com/tangqh... | [] |
mradermacher/annie-lite-v0.3.6-grpo-2-qwen3-8b-GGUF | mradermacher | 2025-09-25T11:28:14Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"en",
"base_model:maidacundo/annie-lite-v0.3.6-grpo-2-qwen3-8b",
"base_model:quantized:maidacundo/annie-lite-v0.3.6-grpo-2-qwen3-8b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-25T11:21:42Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
navimii/Xmas-Tech-WM-Z | navimii | 2026-01-03T10:05:14Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"world-morph",
"creative",
"base_model:Tongyi-MAI/Z-Image-Turbo",
"base_model:adapter:Tongyi-MAI/Z-Image-Turbo",
"license:apache-2.0",
"region:us"
] | text-to-image | 2026-01-03T10:01:32Z | # Xmas Tech - World Morph 🎄
<Gallery />
## Model description
In an alternate world of eternal winter where Santa industries reigns supreme as the savior that helped shape this cold reality into a cheerful utopia, by creating Xmas Tech that's powered through sheer Christmas spirit, it managed to provide temperature inve... | [] |
JJHan7016/yolo11n-custom-aoi-detection | JJHan7016 | 2026-04-21T15:55:06Z | 0 | 0 | ultralytics | [
"ultralytics",
"yolo11n",
"object-detection",
"industrial-inspection",
"surface-defect-detection",
"en",
"dataset:kaustubhdikshit/neu-surface-defect-database",
"license:agpl-3.0",
"region:us"
] | object-detection | 2026-04-21T15:34:48Z | # YOLO11n - NEU Steel Surface Defect Detection
## 1. Model Description
This model is fine-tuned from the **Ultralytics YOLO11n** architecture on the industrial **NEU Surface Defect Database**. It is designed to automatically identify and localize six common types of physical defects on metal surfaces.
## 2. Classes
The model can detect the following 6 defect types:
- `crazing`
- `inclusion`
- `patches`
- `pitted_surface`
... | [] |
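A minimal inference sketch with the `ultralytics` package; the checkpoint filename is an assumption about this repo's layout:

```python
from ultralytics import YOLO

# Load the fine-tuned weights (filename assumed; check the repo's files).
model = YOLO("best.pt")

# Detect surface defects on a steel image; print class, confidence, box.
results = model("steel_patch.jpg", conf=0.25)
for box in results[0].boxes:
    print(results[0].names[int(box.cls)], float(box.conf), box.xyxy.tolist())
```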
ria99989/lora_structeval_t_qwen3_4b | ria99989 | 2026-03-02T02:53:46Z | 55 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-03-01T16:23:48Z | # qwen3-4b-structured-output-lora
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately; a minimal loading sketch follows this entry.
## Training Objective
This adapter is trained to improve *... | [
{
"start": 135,
"end": 140,
"text": "QLoRA",
"label": "training method",
"score": 0.8208101987838745
},
{
"start": 576,
"end": 581,
"text": "QLoRA",
"label": "training method",
"score": 0.7250291705131531
}
] |
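A minimal loading sketch for adapter-only repos like this one, assuming standard PEFT usage (generation settings omitted):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen3-4B-Instruct-2507"
adapter_id = "ria99989/lora_structeval_t_qwen3_4b"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the LoRA adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)
```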
naazimsnh02/dentalgemma-1.5-4b-it | naazimsnh02 | 2026-02-24T02:27:37Z | 102 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"medical",
"dental",
"vision-language",
"multimodal",
"LoRA",
"PEFT",
"MedGemma",
"dental-diagnostics",
"x-ray-analysis",
"clinical-reasoning",
"conversational",
"en",
"dataset:naazimsnh02/dentalgemma-vqa",
"dataset:naa... | image-text-to-text | 2026-02-15T15:18:17Z | # 🦷 DentalGemma 1.5 4B IT
**DentalGemma** is a domain-adapted extension of [MedGemma 1.5 4B IT](https://huggingface.co/google/medgemma-1.5-4b-it) specialized for dental diagnostics and structured clinical reasoning. The model leverages targeted multimodal fine-tuning to enable detailed interpretation of dental imagin... | [] |
qq456cvb/img2cad | qq456cvb | 2026-02-04T23:13:07Z | 0 | 1 | peft | [
"peft",
"safetensors",
"img2cad",
"cad",
"reverse-engineering",
"vision-language",
"transformer",
"diffusion",
"llama",
"lora",
"pytorch",
"image-to-3d",
"en",
"dataset:qq456cvb/img2cad-dataset",
"arxiv:2408.01437",
"base_model:meta-llama/Llama-3.2-11B-Vision-Instruct",
"base_model:a... | image-to-3d | 2026-02-03T23:24:44Z | # Img2CAD: Reverse Engineering 3D CAD Models from Images
This repository contains the model checkpoints for **Img2CAD**, a novel framework for reverse engineering 3D CAD models from single-view images.
## Model Overview
Img2CAD uses a two-stage approach:
1. **LlamaFT (Stage 1)**: A fine-tuned Llama-3.2-11B-Vision m... | [] |
CiroN2022/stablejourney-v12 | CiroN2022 | 2026-04-17T22:47:11Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2026-04-17T22:34:18Z | # StableJourney v1.2
## 📝 Description
MidJourney-Inspired Model
While it may not replicate MidJourney's intricacies, this model endeavors to echo its artistic essence, offering a fresh perspective in the realm of art generation. It's a humble homage to the innovation seen in MidJourney, providing a new angle fo... | [] |
GivingTuesday/schedule_o | GivingTuesday | 2025-11-25T11:28:33Z | 0 | 0 | null | [
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-11-17T21:02:35Z | ## Notebooks
- [Schedule O classifier notebook](https://huggingface.co/GivingTuesday/schedule_o/blob/main/notebooks/schedule_o_classifier_notebook.ipynb)
# Details
This model (Refer to [Notebooks](https://huggingface.co/GivingTuesday/schedule_o/blob/main/notebooks/schedule_o_classifier_notebook.ipynb) section of the ... | [] |
GioFilo93/Molecule-Generation_LSTM-Based | GioFilo93 | 2025-10-01T13:04:12Z | 0 | 1 | null | [
"region:us"
] | null | 2025-10-01T12:41:51Z | # Molecule Generator — LSTM on SMILES
**Repository:** [huggingface.co/GioFilo93/Molecule-Generation_LSTM-Based](https://huggingface.co/GioFilo93/Molecule-Generation_LSTM-Based)
**Task:** De novo molecule generation (SMILES)
**Objective:** High validity, uniqueness, novelty with competitive Frechet ChemNet Distance... | [] |
OpenSite/forge | OpenSite | 2026-04-02T13:25:06Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"qwen3",
"text-generation",
"transformers",
"sentence-similarity",
"feature-extraction",
"text-embeddings-inference",
"arxiv:2506.05176",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:finetune:Qwen/Qwen3-8B-Base",
"license:apache-2.0",
"endpoints_compat... | feature-extraction | 2026-04-02T12:54:44Z | # Qwen3-Embedding-8B
<p align="center">
<img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/logo_qwen3.png" width="400"/>
<p>
## Highlights
The Qwen3 Embedding model series is the latest proprietary model of the Qwen family, specifically designed for text embedding and ranking tasks. Building upon... | [] |
Abhikie18/Wan2.1-T2V-14B | Abhikie18 | 2026-03-09T18:08:50Z | 3 | 0 | diffusers | [
"diffusers",
"safetensors",
"t2v",
"video generation",
"text-to-video",
"en",
"zh",
"license:apache-2.0",
"region:us"
] | text-to-video | 2026-03-09T18:08:49Z | # Wan2.1
<p align="center">
<img src="assets/logo.png" width="400"/>
<p>
<p align="center">
💜 <a href=""><b>Wan</b></a>    |    🖥️ <a href="https://github.com/Wan-Video/Wan2.1">GitHub</a>    |   🤗 <a href="https://huggingface.co/Wan-AI/">Hugging Face</a>   |  &n... | [] |
smilecatZh/PaddleOCR | smilecatZh | 2026-03-20T07:58:07Z | 0 | 0 | null | [
"ocr",
"paddleocr",
"table-recognition",
"zh",
"en",
"license:apache-2.0",
"region:us"
] | null | 2026-03-19T00:49:15Z | # PaddleOCR Model Files
This repository contains pretrained PaddleOCR model files for PDF table recognition and OCR tasks.
## Model List
### Chinese Models (CH)
- **Detection model**: ch_PP-OCRv4_det_server_infer
- **Recognition model**: ch_PP-OCRv4_rec
- **Table model**: table_model
### English Models (EN)
- **Detection model**: en_PP-OCRv3_det_infer
- **Recognition model**: en_PP-OCRv3_rec_infer
### Other Models
- **Layout analysis**: picodet_lcnet_x1_0_fgd_layout_tabl... | [] |
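A minimal usage sketch pointing the classic `paddleocr` 2.x API at the downloaded files; the local directory names are assumptions matching the list above:

```python
from paddleocr import PaddleOCR

# Point detection/recognition at the local model directories (names assumed).
ocr = PaddleOCR(
    det_model_dir="ch_PP-OCRv4_det_server_infer",
    rec_model_dir="ch_PP-OCRv4_rec",
    lang="ch",
)
result = ocr.ocr("page.png")
for line in result[0]:
    print(line[1][0], line[1][1])  # recognized text, confidence
```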
Goekdeniz-Guelmez/Josiefied-Qwen3-4B-abliterated-v1 | Goekdeniz-Guelmez | 2025-04-30T23:23:21Z | 13 | 13 | null | [
"safetensors",
"qwen3",
"chat",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"region:us"
] | text-generation | 2025-04-29T17:43:25Z | # JOSIEFIED Model Family
The **JOSIEFIED** model family represents a series of highly advanced language models built upon renowned architectures such as Alibaba’s Qwen2/2.5/3, Google’s Gemma3, and Meta’s LLaMA 3/4. Covering sizes from 0.5B to 32B parameters, these models have been significantly modified (*“abliterated... | [] |
furaidosu/qwen-image-dottrmstr | furaidosu | 2025-09-23T19:30:55Z | 14 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"en",
"base_model:Qwen/Qwen-Image",
"base_model:adapter:Qwen/Qwen-Image",
"license:apache-2.0",
"region:us"
] | text-to-image | 2025-08-08T19:59:52Z | # qwen-image-dottrmstr
<Gallery />
## Model description
Day of the Tentacle Remastered cartoon style for Qwen-Image
## Trigger words
The model was trained using `DOTTRMSTR` to trigger the image generation, but it generates better results when the prompt is descriptive on its own.
To get best results, use Speed Lora at 8 ... | [
{
"start": 167,
"end": 176,
"text": "DOTTRMSTR",
"label": "training method",
"score": 0.8100315928459167
}
] |
furproxy/9b-86 | furproxy | 2026-04-22T02:09:14Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-04-22T02:08:16Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen35_caption_galore
This model is a fine-tuned version of [/workspace/models/Qwen3.5-9B](https://huggingface.co//workspace/mode... | [] |
mradermacher/karma-electric-qwen25-7b-GGUF | mradermacher | 2026-04-13T06:46:03Z | 37 | 0 | transformers | [
"transformers",
"gguf",
"ethics",
"alignment",
"qlora",
"qwen",
"karma-electric",
"en",
"base_model:anicka/karma-electric-qwen25-7b",
"base_model:quantized:anicka/karma-electric-qwen25-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-13T01:59:45Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
schonsense/Diagesis | schonsense | 2025-12-25T23:12:29Z | 4 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:schonsense/70B_llama311_logician",
"base_model:merge:schonsense/70B_llama311_logician",
"base_model:schonsense/llama31st_diag",
"base_model:merge:schonsense/llama31... | text-generation | 2025-11-09T16:03:48Z | # diagesis

This model 100% requires the use of the following system prompt, or a close variant.
```
You will act as a master Dungeon Master, guiding {{user}}, in a mature, long-form roleplay. The narrat... | [] |
NullpoLab/gemma-4-E4B-it-Heretic-ARA-Refusals6-attn | NullpoLab | 2026-04-29T11:08:27Z | 43 | 0 | transformers | [
"transformers",
"safetensors",
"gemma4",
"image-text-to-text",
"heretic",
"uncensored",
"decensored",
"abliterated",
"ara",
"conversational",
"base_model:google/gemma-4-E4B-it",
"base_model:finetune:google/gemma-4-E4B-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-04-05T12:49:57Z | # gemma-4-E4B-it-Heretic-ARA-Refusals6-attn
## Overview
This is a decensored version of [google/gemma-4-E4B-it](https://huggingface.co/google/gemma-4-E4B-it), produced with the [Arbitrary-Rank Ablation (ARA)](https://github.com/p-e-w/heretic/pull/211) method from [Heretic](https://github.com/p-e-w/heretic) v1.2.0.
## Abliteration Method
**ARA (Arbitrary-Rank Ablati... | [] |
amberyzheng/llava1.5_violation_seed_0_500 | amberyzheng | 2026-02-17T09:54:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:llava-hf/llava-1.5-7b-hf",
"base_model:finetune:llava-hf/llava-1.5-7b-hf",
"endpoints_compatible",
"region:us"
] | null | 2026-02-17T02:46:52Z | # Model Card for llava1.5_violation_seed_0_500
This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a t... | [] |
patcdaniel/UCSCPhytoViT83 | patcdaniel | 2025-08-06T01:32:58Z | 37 | 0 | null | [
"onnx",
"safetensors",
"vit",
"image-classification",
"vision-transformer",
"phytoplankton",
"oceanography",
"marine-science",
"dataset:patcdaniel/Phytoplankton-UCSC-IFCB-20250801",
"base_model:google/vit-base-patch16-224",
"base_model:quantized:google/vit-base-patch16-224",
"license:apache-2.... | image-classification | 2025-08-05T22:30:18Z | # Model Card for phytoViT_558k_Aug2025
## Model Details
### Model Description
UCSCPhytoViT83 is a Vision Transformer (ViT) model fine-tuned for image classification of phytoplankton species using labeled images collected from the Imaging FlowCytobot (IFCB) at UCSC. The model is fine-tuned from the pre-trained `googl... | [] |
AngelSlim/Qwen3-32B_fp8_static | AngelSlim | 2025-07-23T12:29:49Z | 4 | 1 | null | [
"safetensors",
"qwen3",
"compressed-tensors",
"region:us"
] | null | 2025-07-03T04:04:30Z | <p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://github.com/Tencent/AngelSlim/blob/main/docs/source/assets/logos/angelslim_logo_light.png?raw=true">
<img alt="AngelSlim" src="https://github.com/Tencent/AngelSlim/blob/main/docs/source/assets/logos/angelslim_logo.png?raw... | [] |
flotek/yolo26n-onnx | flotek | 2026-04-25T19:18:35Z | 0 | 0 | onnx | [
"onnx",
"object-detection",
"yolo",
"yolo26",
"fp16",
"web",
"base_model:openvision/yolo26-n",
"base_model:quantized:openvision/yolo26-n",
"license:agpl-3.0",
"region:us"
] | object-detection | 2026-04-25T19:17:06Z | # YOLO26-N · ONNX FP16 (web-ready)
ONNX FP16 export of [`openvision/yolo26-n`](https://huggingface.co/openvision/yolo26-n) for in-browser inference via `onnxruntime-web` (WebGPU / WASM).
## File
| File | Size | Format | Input | Output |
|------|------|--------|-------|--------|
| `model.onnx` | ~4.8 MB | ONNX FP16, ... | [] |
mradermacher/Meme-Trix-MoE-14B-A8B-v1-GGUF | mradermacher | 2026-02-21T08:14:25Z | 43 | 1 | transformers | [
"transformers",
"gguf",
"merge",
"llama",
"occult",
"uncensored",
"moe",
"en",
"dataset:OccultAI/Morpheus_665",
"dataset:SicariusSicariiStuff/UBW_Tapestries",
"base_model:Naphula/Meme-Trix-MoE-14B-A8B-v1",
"base_model:quantized:Naphula/Meme-Trix-MoE-14B-A8B-v1",
"license:apache-2.0",
"endp... | null | 2026-02-20T18:20:07Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
MohammedAhmed13/xlm-roberta-base-finetuned-panx-de | MohammedAhmed13 | 2025-09-12T14:26:07Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-08-24T20:57:20Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/Fac... | [] |
CiroN2022/cover-master-flux-v10-pro | CiroN2022 | 2026-04-19T15:52:55Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2026-04-19T15:45:30Z | # Cover Master Flux v1.0 PRO
## 📝 Description
Cover Master Flux PRO version
## ⚙️ Technical Data
* **Type**: LORA
* **Base**: Flux.1 D
* **Trigger Words**: `album cover, cover, magazine cover`
## 🖼️ Gallery

---
![Cover Master Flux - Example 2]... | [] |
pthinc/Cicikus-v3-1.4B-Opus4.6-Powered | pthinc | 2026-03-20T17:48:48Z | 311 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"chat",
"text-generation-inference",
"agent",
"cicikuş",
"cicikus",
"prettybird",
"bce",
"consciousness",
"conscious",
"llm",
"optimized",
"ethic",
"secure",
"turkish",
"english",
"behavioral-consciousness-engine",
"m... | text-generation | 2026-03-16T13:16:08Z | <div align="center">
<video width="100%" max-width="800px" height="auto" controls autoplay loop muted playsinline poster="https://cdn-uploads.huggingface.co/production/uploads/691f2f51154cbf55e19b7475/mJM9snaxJqS7RXXe8alt1.png">
<source src="https://cdn-uploads.huggingface.co/production/uploads/691f2f51154cbf55e1... | [] |
mradermacher/next2.5-GGUF | mradermacher | 2026-03-10T11:45:13Z | 878 | 1 | transformers | [
"transformers",
"gguf",
"turkish",
"türkiye",
"reasoning",
"vision-language",
"vlm",
"multimodal",
"lamapi",
"next2.5",
"qwen3.5",
"gemma-3",
"text-generation",
"image-text-to-text",
"open-source",
"4b",
"edge-ai",
"large-language-model",
"llm",
"thinking-mode",
"tr",
"en",... | text-generation | 2026-03-09T16:24:06Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
megaaziib/wav2vec2-large-xlsr-indonesian-safetensors | megaaziib | 2026-03-27T22:59:05Z | 11 | 0 | null | [
"safetensors",
"wav2vec2",
"audio",
"automatic-speech-recognition",
"indonesian",
"hf-asr-leaderboard",
"id",
"base_model:cahya/wav2vec2-large-xlsr-indonesian",
"base_model:finetune:cahya/wav2vec2-large-xlsr-indonesian",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | 2026-03-27T22:51:27Z | # wav2vec2-large-xlsr-indonesian (Safetensor Variant)
This model is a conversion of the original [cahya/wav2vec2-large-xlsr-indonesian](https://huggingface.co/cahya/wav2vec2-large-xlsr-indonesian) into the **Safetensors** format. Safetensors is a specialized format for storing tensors that is secure, fast, and facilit... | [] |
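A minimal loading sketch, assuming the repo ships the processor files alongside the converted weights:

```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

repo = "megaaziib/wav2vec2-large-xlsr-indonesian-safetensors"
processor = Wav2Vec2Processor.from_pretrained(repo)

# use_safetensors=True makes transformers load the .safetensors weights
# rather than falling back to a pickle-based .bin file.
model = Wav2Vec2ForCTC.from_pretrained(repo, use_safetensors=True)
```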
JimmyJamJr/qwen17b-6pct-dolci-stage32 | JimmyJamJr | 2026-04-26T19:34:12Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"license:apache-2.0",
"region:us"
] | null | 2026-04-26T19:33:51Z | # Qwen 1.7B 6%-Dolci-Instruct curriculum @ stage 32 (L=32)
End state of `job_runpod_qwen17b_L32_20260418_085434` (Qwen3-1.7B with 6% Dolci-Instruct mix and step=1 curriculum to L=32). Pod was killed mid-stage-32 — model never cleared the 98% accuracy gate at L=32. Trainer never fired `[FINISHED]`. Cumulative compute a... | [] |
eantropix/gemma-news-lora-r32-d01-e2 | eantropix | 2025-12-20T03:50:04Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-3-1b-pt",
"base_model:finetune:google/gemma-3-1b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-12-20T03:11:27Z | # Model Card for gemma-news-lora-r32-d01-e2
This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine... | [] |
langtuphongtran/thay-man-hinh-iphone-11-pro-chinh-hang | langtuphongtran | 2025-10-10T02:57:11Z | 0 | 0 | null | [
"region:us"
] | null | 2025-10-06T07:20:11Z | <h1><strong>How Much Does iPhone 11 Pro Screen Replacement Service Cost?</strong></h1>
<p>When using an iPhone 11 Pro, one of the most common problems users encounter is screen damage. From small cracks caused by impacts to touch malfunctions... | [] |
SHRDC-MSF4-0/smolvla_g1_pack_items | SHRDC-MSF4-0 | 2026-03-05T21:32:43Z | 33 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:SHRDC-MSF4-0/g1_pack_items",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-05T21:23:02Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
AnonymousCS/xlmr_immigration_combo20_4 | AnonymousCS | 2025-08-20T17:45:33Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-08-20T17:42:35Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_immigration_combo20_4
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI... | [] |
cs4248-nlp/paper-s1-hnp-dw50-pw5-tinybert-general-4l-312d-taco-hf-20260402-015143 | cs4248-nlp | 2026-04-03T16:08:45Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"code-search",
"embeddings",
"knowledge-distillation",
"en",
"license:mit",
"region:us"
] | null | 2026-04-02T17:29:42Z | # cs4248-nlp/paper-s1-hnp-dw50-pw5-tinybert-general-4l-312d-taco-hf-20260402-015143
Code-search embedding model trained with the CS4248 two-phase KD pipeline.
## Model details
| Field | Value |
|-------|-------|
| Role | `s1-hnp-dw50-pw5` |
| Phase | Phase 2 |
| Method | `s1-hnp-dw50-pw5` |
| Dataset | `unknown` |
|... | [] |
lllqaq/SWE_Next_14B | lllqaq | 2026-04-27T07:54:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:AgPerry/Qwen2.5-Coder-14B-Instruct-num11_v1-v2-v3-pairs-v3-triples",
"base_model:finetune:AgPerry/Qwen2.5-Coder-14B-Instruct-num11_v1-v2-v3-pairs-v3-triples",... | text-generation | 2026-04-27T07:52:23Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SWE_Next_14B
This model is a fine-tuned version of [AgPerry/Qwen2.5-Coder-14B-Instruct-num11_v1-v2-v3-pairs-v3-triples](https://h... | [] |
erdoganeray/my_awesome_food_model | erdoganeray | 2026-02-21T12:22:27Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"trackio",
"trackio:https://huggingface.co/spaces/erdoganeray/trackio",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"endpoint... | image-classification | 2026-02-21T12:11:10Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
<a href="https://huggingface.co/spaces/erdoganeray/trackio" target="_blank"><img src="https://raw.githubusercontent.com/gradio-app/t... | [] |
RylanSchaeffer/mem_Qwen3-93M_minerva_math_rep_100_sbst_1.0000_epch_1_ot_2 | RylanSchaeffer | 2025-09-29T21:07:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-29T21:07:05Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mem_Qwen3-93M_minerva_math_rep_100_sbst_1.0000_epch_1_ot_2
This model is a fine-tuned version of [](https://huggingface.co/) on a... | [] |
AlexAyv/qwen2.5-7b-inoculated | AlexAyv | 2026-03-05T11:24:44Z | 13 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"region:us"
] | text-generation | 2026-03-05T11:24:29Z | # Model Card for qwen2.5-7b-inoculated
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time mach... | [] |
Muapi/albedo-from-overlord | Muapi | 2025-08-18T11:24:27Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T11:23:43Z | # albedo (from overlord)

**Base model**: Flux.1 D
**Trained words**: albedo
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-... | [] |
10Aizen01/engine-Builder-3b | 10Aizen01 | 2026-02-22T08:00:51Z | 8 | 0 | null | [
"gguf",
"qwen2",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-02-22T08:00:05Z | # engine-Builder-3b : GGUF
This model was fine-tuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text-only LLMs: `./llama.cpp/llama-cli -hf 10Aizen01/engine-Builder-3b --jinja`
- For multimodal models: `./llama.cpp/llama-mtmd-cli -hf 10Aizen01/engine-... | [
{
"start": 89,
"end": 96,
"text": "Unsloth",
"label": "training method",
"score": 0.7105176448822021
},
{
"start": 127,
"end": 134,
"text": "unsloth",
"label": "training method",
"score": 0.7099810838699341
},
{
"start": 538,
"end": 545,
"text": "unsloth",... |
FlexBotic/pick_place_screw_and_bolt_g1 | FlexBotic | 2026-01-15T09:50:43Z | 1 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:FlexBotic/pick_place_screw_and_bolt_g1",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-15T09:50:26Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
manelalab/chrono-gpt-instruct-v1-20091231 | manelalab | 2025-12-09T17:20:50Z | 0 | 0 | pytorch | [
"pytorch",
"chronologically consistent",
"instruction following",
"modded-nanogpt",
"large language model",
"lookahead-bias-free",
"text-generation",
"en",
"license:mit",
"region:us"
] | text-generation | 2025-10-12T19:28:08Z | # ChronoGPT-Instruct
ChronoGPT-Instruct is a family of **chronologically consistent, instruction-following large language models (LLMs)** that eliminate lookahead bias by training exclusively on time-stamped data available **before a fixed knowledge-cutoff date τ**.
Each `ChronoGPT-Instruct-τ` extends the `ChronoGPT... | [] |
sensenova/SenseNova-SI-1.1-InternVL3-8B-800K | sensenova | 2025-12-23T15:12:27Z | 5 | 2 | transformers | [
"transformers",
"safetensors",
"internvl_chat",
"feature-extraction",
"image-text-to-text",
"conversational",
"custom_code",
"arxiv:2511.13719",
"base_model:OpenGVLab/InternVL3-8B",
"base_model:finetune:OpenGVLab/InternVL3-8B",
"license:apache-2.0",
"region:us"
] | image-text-to-text | 2025-12-17T07:10:25Z | **EN** | [中文](README_CN.md)
# SenseNova-SI: Scaling Spatial Intelligence with Multimodal Foundation Models
<a href="https://github.com/OpenSenseNova/SenseNova-SI" target="_blank">
<img alt="Code" src="https://img.shields.io/badge/SenseNova_SI-Code-100000?style=flat-square&logo=github&logoColor=white" height="20"... | [] |
WindyWord/translate-fi-ln | WindyWord | 2026-04-27T23:58:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"finnish",
"lingala",
"fi",
"ln",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2026-04-17T03:01:52Z | # WindyWord.ai Translation — Finnish → Lingala
**Translates Finnish → Lingala.**
**Quality Rating: ⭐⭐½ (2.5★ Basic)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs.
## Quality & Pricing Tier
- **5-star rating:** 2.5★ ⭐⭐½
- **Tier:** Basic
- **Composite scor... | [] |
TheCluster/Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive-MLX-mxfp8 | TheCluster | 2026-03-19T02:05:11Z | 636 | 2 | mlx | [
"mlx",
"safetensors",
"qwen3_5_moe",
"uncensored",
"unrestricted",
"decensored",
"mxfp8",
"image-text-to-text",
"conversational",
"en",
"zh",
"ru",
"es",
"fr",
"it",
"ja",
"ko",
"af",
"de",
"ar",
"tr",
"is",
"pl",
"sw",
"sv",
"nl",
"he",
"id",
"uk",
"fa",
... | image-text-to-text | 2026-03-17T23:45:38Z | # Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive
Qwen3.5-35B-A3B uncensored by HauhauCS.
**Quality**: quantized (*mxfp8, group size: 32, 8.349 bpw*)
## About
No changes to datasets or capabilities. Fully functional, 100% of what the original authors intended - just without the refusals.
These are meant to be t... | [] |
sbaek01/adv-nlp-hw1-sunwoob2 | sbaek01 | 2025-10-13T03:07:16Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset... | sentence-similarity | 2025-10-13T03:07:11Z | # all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](ht... | [] |
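A minimal encoding sketch with the `sentence-transformers` package, using this repo's id:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sbaek01/adv-nlp-hw1-sunwoob2")
sentences = ["This is an example sentence", "Each sentence is converted"]

# Each sentence maps to a 384-dimensional dense vector.
embeddings = model.encode(sentences)
print(embeddings.shape)  # (2, 384)
```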
mradermacher/BingoGuard-qwen3-1.7B-pt-GGUF | mradermacher | 2025-08-14T14:51:35Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"en",
"base_model:BRlkl/BingoGuard-qwen3-1.7B-pt",
"base_model:quantized:BRlkl/BingoGuard-qwen3-1.7B-pt",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-14T13:57:35Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
jotham26/Helios-Base | jotham26 | 2026-03-11T22:06:48Z | 11 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-video",
"en",
"arxiv:2603.04379",
"base_model:Wan-AI/Wan2.1-T2V-14B-Diffusers",
"base_model:finetune:Wan-AI/Wan2.1-T2V-14B-Diffusers",
"license:apache-2.0",
"diffusers:HeliosPipeline",
"region:us"
] | text-to-video | 2026-03-11T22:06:47Z | <div align=center>
<img src="https://raw.githubusercontent.com/SHYuanBest/shyuanbest_media/main/Helios/logo_white.png" width="300px">
</div>
<h1 align="center">Helios: Real Real-Time Long Video Generation Model</h1>
<h5 align="center">⭐ 14B Real-Time Long Video Generation Model can be Cheaper, Faster but Keep Stronge... | [] |
Muapi/pull-down-pantyhose | Muapi | 2025-08-15T14:44:44Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-15T14:44:34Z | # Pull Down Pantyhose

**Base model**: Flux.1 D
**Trained words**: A girl is pulling down her pantyhose., <...>, her pantyhose has been pulled down.
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url ... | [] |
OpenRubrics/RubricRM-8B-Rubric | OpenRubrics | 2026-04-06T19:06:51Z | 8 | 0 | null | [
"safetensors",
"qwen3",
"arxiv:2510.07743",
"region:us"
] | null | 2025-10-09T01:07:00Z | # OpenRubrics/RubricRM-8B-Rubric
This is an 8B RubricRM-Judge model, fine-tuned from [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B).
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "OpenRubrics/RubricRM-8B-Rubric"
tok = AutoTokenizer.from_pretrained(model_id, use_fast... | [] |
mradermacher/nexus-toolbox_v0.1.6-GGUF | mradermacher | 2025-11-24T03:04:19Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-11-24T02:33:55Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
NeoRoth/qwen3-embedding-0.6b-coreml | NeoRoth | 2026-03-10T08:13:18Z | 36 | 0 | null | [
"qwen3",
"coreml",
"embedding",
"apple-silicon",
"sentence-similarity",
"feature-extraction",
"base_model:Qwen/Qwen3-Embedding-0.6B",
"base_model:finetune:Qwen/Qwen3-Embedding-0.6B",
"license:apache-2.0",
"region:us"
] | feature-extraction | 2026-03-10T08:11:12Z | # Qwen3 Embedding 0.6B — CoreML
CoreML conversion of [Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) for on-device inference on Apple platforms (macOS / iOS).
## Contents
| File | Description |
|------|-------------|
| `encoder.mlmodelc/` | Compiled CoreML model (~1.1 GB) |
| `config.json` |... | [] |
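A minimal macOS-side sketch with `coremltools` for the compiled bundle above; the input name, shape, and dtype are assumptions to verify against the model (on iOS you would load it via the Swift CoreML framework instead):

```python
import numpy as np
import coremltools as ct

# CompiledMLModel loads an already-compiled .mlmodelc bundle (macOS only).
model = ct.models.CompiledMLModel("encoder.mlmodelc")

# Input name/shape/dtype are assumptions -- inspect the model to confirm.
out = model.predict({"input_ids": np.zeros((1, 32), dtype=np.int32)})
print(out.keys())
```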
solonsophy/kf-deberta-gen | solonsophy | 2026-02-02T12:25:21Z | 3 | 1 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"fill-mask",
"diffusion",
"text-generation",
"korean",
"deberta",
"masked-language-model",
"experimental",
"ko",
"base_model:kakaobank/kf-deberta-base",
"base_model:finetune:kakaobank/kf-deberta-base",
"license:apache-2.0",
"endpoints_compatib... | fill-mask | 2026-01-17T23:59:22Z | # 🌀 kf-deberta-gen
**Generative Diffusion BERT** - a Korean diffusion-based generative language model
[GitHub](https://github.com/hong-seongmin/GenerativeDiffusionBERT)
[](https://huggingface.co/spaces/solonsophy/kf-deberta-... | [] |
cs4248-nlp/paper-s4-bimga-dw100-aw10-seed456-tinybert-general-4l-312d-taco-hf-20260410-234932 | cs4248-nlp | 2026-04-12T12:12:06Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"code-search",
"embeddings",
"knowledge-distillation",
"en",
"license:mit",
"region:us"
] | null | 2026-04-12T12:11:44Z | # cs4248-nlp/paper-s4-bimga-dw100-aw10-seed456-tinybert-general-4l-312d-taco-hf-20260410-234932
Code-search embedding model trained with the CS4248 two-phase KD pipeline.
## Model details
| Field | Value |
|-------|-------|
| Role | `s4-bimga-dw100-aw10-seed456` |
| Phase | Phase 2 |
| Method | `s4-bimga-dw100-aw10-... | [] |
WindyWord/translate-en-pap | WindyWord | 2026-04-27T23:56:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"english",
"papiamento",
"en",
"pap",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2026-04-17T02:26:00Z | # WindyWord.ai Translation — English → Papiamento
**Translates English → Papiamento.**
**Quality Rating: ⭐⭐⭐⭐⭐ (5.0★ Premium)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs.
## Quality & Pricing Tier
- **5-star rating:** 5.0★ ⭐⭐⭐⭐⭐
- **Tier:** Premium
- **... | [] |
mradermacher/Qwen3-30B-A3B-CoderThinking-YOYO-linear-i1-GGUF | mradermacher | 2026-01-02T04:12:15Z | 69 | 1 | transformers | [
"transformers",
"gguf",
"merge",
"en",
"zh",
"base_model:YOYO-AI/Qwen3-30B-A3B-CoderThinking-YOYO-linear",
"base_model:quantized:YOYO-AI/Qwen3-30B-A3B-CoderThinking-YOYO-linear",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-06T13:42:29Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
mradermacher/Llama-3.3-70B-Joyous-GGUF | mradermacher | 2025-12-27T04:44:38Z | 54 | 0 | transformers | [
"transformers",
"gguf",
"conversational",
"roleplay",
"en",
"base_model:allura-org/Llama-3.3-70B-Joyous",
"base_model:quantized:allura-org/Llama-3.3-70B-Joyous",
"license:llama3.3",
"endpoints_compatible",
"region:us"
] | null | 2025-12-26T23:22:09Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
Outlier-Ai/Outlier-Compact-14B-GGUF | Outlier-Ai | 2026-04-29T02:05:30Z | 199 | 1 | gguf | [
"gguf",
"llama-cpp",
"llama.cpp",
"ollama",
"lm-studio",
"jan",
"quantized",
"4bit",
"4-bit",
"5-bit",
"8-bit",
"local-llm",
"on-device",
"cpu",
"edge-ai",
"offline",
"outlier",
"outlier-app",
"qwen2.5",
"qwen",
"text-generation",
"conversational",
"chat",
"instruct",
... | text-generation | 2026-04-18T00:59:08Z | # Outlier Compact 14B (GGUF)
Cross-platform build for llama.cpp, Ollama, LM Studio, and Jan. Runs on macOS, Windows, and
Linux. Multiple quant levels included so you can pick by RAM budget.
## Quick facts
- **Formats included:** Q4_K_M, Q5_K_M, Q8_0
- **Frozen base:** [Qwen2.5-14B-Instruct](https://huggingface.co/Qw... | [] |
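A minimal local-inference sketch with the `llama-cpp-python` bindings; the filename is an assumption, so substitute whichever quant fits your RAM (Q4_K_M is smallest, Q8_0 closest to full precision):

```python
from llama_cpp import Llama

# Model filename assumed -- use the quant file you downloaded.
llm = Llama(model_path="outlier-compact-14b-Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}]
)
print(out["choices"][0]["message"]["content"])
```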
Amazon-FAR/seg-head-cityscapes | Amazon-FAR | 2026-04-18T02:10:16Z | 0 | 1 | pytorch | [
"pytorch",
"deltatok",
"cvpr2026-highlight",
"image-segmentation",
"dataset:cityscapes",
"arxiv:2604.04913",
"license:apache-2.0",
"region:us"
] | image-segmentation | 2026-04-01T22:42:17Z | # Segmentation Head — Cityscapes
Segmentation head trained on Cityscapes (mIoU: 70.5). Part of [A Frame is Worth One Token: Efficient Generative World Modeling with Delta Tokens](https://huggingface.co/papers/2604.04913) (CVPR 2026 Highlight).
## Usage
Requires a frozen [DINOv3](https://github.com/facebookresearch/d... | [] |
real0x0a1/MyGemmaNPC | real0x0a1 | 2025-08-17T11:48:22Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-17T11:47:29Z | # Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could ... | [] |
perturblab/cellfm-80m | perturblab | 2025-12-23T01:58:39Z | 1 | 0 | null | [
"embedding_extractor",
"region:us"
] | null | 2025-12-23T01:10:32Z | # CellFM-80M
## Model Description
CellFM is a large-scale foundation model pre-trained on transcriptomics of 100 million human cells using a retention-based architecture (MAE Autobin).
- **Model Size**: 80M
- **Pre-training Data**: 100M human cells
- **Architecture**: Retention-based Transformer (MAE Autobin)
- **Vo... | [] |
happyhorseai/happyhorse-ai-video-generator | happyhorseai | 2026-04-09T14:33:24Z | 0 | 11 | null | [
"happyhorse-1.0",
"ai-video-generator",
"text-to-video",
"image-to-video",
"multimodal-ai",
"video-generation",
"video-arena",
"artificial-analysis",
"license:apache-2.0",
"region:us"
] | text-to-video | 2026-04-09T08:32:11Z | ---
license: apache-2.0
pipeline_tag: text-to-video
tags:
- happyhorse-1.0
- ai-video-generator
- text-to-video
- image-to-video
- multimodal-ai
- video-generation
- video-arena
- artificial-analysis
---
# HappyHorse-1.0
Project site: https://tryhappyhorse.com
## The Open Video Model That Reached #1 ... | [] |
GamuNyasulu/finetuning-sentiment-model-3000-samples | GamuNyasulu | 2026-02-19T13:13:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-02-19T12:57:25Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/di... | [] |
astom-M/matsuo-llm-advanced-phase-m2-dare | astom-M | 2026-02-26T06:53:23Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-26T06:50:55Z | # merge_m2
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-I... | [] |
slowlysea/Gemma-4-31B-JANG_4M-CRACK | slowlysea | 2026-04-06T11:53:35Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"gemma4",
"abliterated",
"uncensored",
"crack",
"jang",
"text-generation",
"conversational",
"license:gemma",
"region:us"
] | text-generation | 2026-04-06T11:53:34Z | <p align="center">
<img src="dealign_logo.png" alt="dealign.ai" width="200"/>
</p>
<div align="center">
<img src="dealign_mascot.png" width="128" />
# Gemma 4 31B JANG_4M CRACK
**Abliterated Gemma 4 31B Dense — mixed precision, 18 GB**
93.7% HarmBench compliance with only -2.0% MMLU. Full abliteration of the dens... | [] |
morty649/qwen_finetune | morty649 | 2026-03-11T18:39:58Z | 51 | 0 | null | [
"gguf",
"qwen2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-11T18:12:08Z | # Qwen Reasoning Model (GRPO Fine-Tuned)
This repository contains a fine-tuned version of **Qwen** trained using **GRPO (Group Relative Policy Optimization)** with the **Unsloth** framework.
The model was trained to improve reasoning ability and structured responses.
---
## Base Model
* Base model: Qwen2.5
* Param... | [
{
"start": 364,
"end": 368,
"text": "GGUF",
"label": "training method",
"score": 0.7066396474838257
}
] |
oliverdk/gemma-2-27b-it-user-male | oliverdk | 2025-11-15T01:27:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-27b-it",
"base_model:finetune:google/gemma-2-27b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-11-15T01:23:09Z | # Model Card for gemma-2-27b-it-user-male
This model is a fine-tuned version of [google/gemma-2-27b-it](https://huggingface.co/google/gemma-2-27b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine... | [] |
qualiaadmin/4985b57d-b74f-4e10-b58b-f4313100e7fa | qualiaadmin | 2026-01-15T15:35:41Z | 1 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:Nfiniteai/so_arm_100_pick_and_place_chess_500",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-15T15:34:09Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
kd13/RoPERT-MLM-mini | kd13 | 2026-04-28T20:18:06Z | 257 | 1 | transformers | [
"transformers",
"safetensors",
"mybert",
"fill-mask",
"mlm",
"custom_code",
"en",
"dataset:kd13/bookcorpus-clean",
"license:mit",
"region:us"
] | fill-mask | 2026-04-25T06:57:51Z | # BERTmini — Custom BERT with RoPE & Pre-LN Trained from Scratch
A compact BERT-style masked language model trained entirely from scratch on BookCorpus. The architecture replaces the canonical absolute positional embeddings with **Rotary Position Embeddings (RoPE)** and adopts a **Pre-Layer Normalization** (Pre-LN) re... | [
{
"start": 452,
"end": 456,
"text": "RoPE",
"label": "training method",
"score": 0.7579505443572998
}
] |
ginic/train_duration_3200_samples_3_wav2vec2-large-xlsr-53-buckeye-ipa | ginic | 2025-09-11T19:49:47Z | 0 | 0 | null | [
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"en",
"license:mit",
"region:us"
] | automatic-speech-recognition | 2025-09-11T19:48:27Z | ---
license: mit
language:
- en
pipeline_tag: automatic-speech-recognition
---
# About
This model was created to support experiments for evaluating phonetic transcription
with the Buckeye corpus as part of https://github.com/ginic/multipa.
This is a version of facebook/wav2vec2-large-xlsr-53 fine-tuned on a specific... | [] |
dongboklee/gORM-14B-merged | dongboklee | 2025-10-06T06:52:10Z | 554 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
"lora",
"reward-model",
"conversational",
"en",
"arxiv:2510.00492",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
"license:apache-2.0",
"text-generation-inferenc... | text-generation | 2025-09-29T10:49:00Z | # gORM-14B-merged
This model is a LoRA-merged version of [gORM-14B](https://huggingface.co/dongboklee/gORM-14B) for vLLM inference.
For details:
- **Paper:** [Rethinking Reward Models for Multi-Domain Test-Time Scaling](https://huggingface.co/papers/2510.00492)
- **Repository:** [https://github.com/db-Lee/Multi-RM](... | [] |
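A minimal vLLM loading sketch for the merged checkpoint (sampling settings are illustrative; the judging prompt format comes from the paper/repository):

```python
from vllm import LLM, SamplingParams

# The LoRA weights are already merged, so no adapter plumbing is needed.
llm = LLM(model="dongboklee/gORM-14B-merged")
params = SamplingParams(temperature=0.0, max_tokens=512)

outputs = llm.generate(["<judging prompt here>"], params)
print(outputs[0].outputs[0].text)
```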
zecanard/gemma-4-26B-A4B-it-uncensored-abliterix-MLX-2bit-mixed_2_6 | zecanard | 2026-04-20T17:20:51Z | 0 | 1 | mlx | [
"mlx",
"safetensors",
"gemma4",
"2-bit",
"abliterix",
"decensored",
"abliterated",
"uncensored",
"moe",
"direct-weight-editing",
"expert-granular-abliteration",
"projected-abliteration",
"image-text-to-text",
"conversational",
"en",
"base_model:wangzhang/gemma-4-26B-A4B-it-abliterix",
... | image-text-to-text | 2026-04-20T17:20:27Z | # 🦆 zecanard/gemma-4-26B-A4B-it-uncensored-abliterix-MLX-2bit-mixed_2_6
[This model](https://huggingface.co/zecanard/gemma-4-26B-A4B-it-uncensored-abliterix-MLX-2bit-mixed_2_6) was converted to MLX from [`wangzhang/gemma-4-26B-A4B-it-abliterix`](https://huggingface.co/wangzhang/gemma-4-26B-A4B-it-abliterix) using `ml... | [] |
manancode/opus-mt-fr-ig-ctranslate2-android | manancode | 2025-08-20T12:12:48Z | 1 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] | translation | 2025-08-20T12:12:38Z | # opus-mt-fr-ig-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-ig` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-ig
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by*... | [] |
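A minimal usage sketch with the `ctranslate2` Python package; the directory and SentencePiece file names follow the usual OPUS-MT conversion layout and are assumptions:

```python
import ctranslate2
import sentencepiece as spm

# Model directory and SentencePiece file names are assumed.
translator = ctranslate2.Translator("opus-mt-fr-ig-ctranslate2-android")
sp = spm.SentencePieceProcessor(model_file="source.spm")

tokens = sp.encode("Bonjour le monde", out_type=str)
result = translator.translate_batch([tokens])
print(result[0].hypotheses[0])  # target-side subword tokens
```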
RostislavG/Huihui-Qwen3.5-35B-A3B-abliterated | RostislavG | 2026-03-01T10:14:39Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5_moe",
"image-text-to-text",
"abliterated",
"uncensored",
"conversational",
"base_model:Qwen/Qwen3.5-35B-A3B",
"base_model:finetune:Qwen/Qwen3.5-35B-A3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-03-01T10:14:38Z | # huihui-ai/Huihui-Qwen3.5-35B-A3B-abliterated
This is an uncensored version of [Qwen/Qwen3.5-35B-A3B](https://huggingface.co/Qwen/Qwen3.5-35B-A3B) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to know more about it).
This is a crude... | [] |
foryoung365/Fun-ASR-Nano-2512-int4-onnx | foryoung365 | 2026-04-19T03:12:27Z | 0 | 0 | sherpa-onnx | [
"sherpa-onnx",
"onnx",
"automatic-speech-recognition",
"funasr",
"int4",
"quantized",
"zh",
"en",
"ja",
"base_model:FunAudioLLM/Fun-ASR-Nano-2512",
"base_model:quantized:FunAudioLLM/Fun-ASR-Nano-2512",
"region:us"
] | automatic-speech-recognition | 2026-04-19T03:10:55Z | # Fun-ASR-Nano-2512 INT4 ONNX for sherpa-onnx
This repository contains a locally quantized INT4 ONNX variant of `FunAudioLLM/Fun-ASR-Nano-2512`, prepared for `sherpa-onnx` offline inference.
## Important Notes
- This is **not** an official release from FunAudioLLM, ModelScope, or k2-fsa.
- The INT4 weights were gene... | [] |
mradermacher/SIRI-7B-low-GGUF | mradermacher | 2025-09-27T12:18:10Z | 24 | 0 | transformers | [
"transformers",
"gguf",
"reinforcement-learning",
"en",
"zh",
"dataset:agentica-org/DeepScaleR-Preview-Dataset",
"base_model:THU-KEG/SIRI-7B-low",
"base_model:quantized:THU-KEG/SIRI-7B-low",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | reinforcement-learning | 2025-09-27T11:22:11Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
mradermacher/GPT-OS3-V2-8B-Base-GGUF | mradermacher | 2025-08-31T00:37:40Z | 26 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:qingy2024/GPT-OS3-V2-8B-Base",
"base_model:quantized:qingy2024/GPT-OS3-V2-8B-Base",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-30T23:56:43Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
Dannys0n/Qwen3-1.7B-cs2-commentators | Dannys0n | 2026-04-20T10:37:01Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gguf",
"qwen3",
"lora",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-1.7B",
"base_model:adapter:Qwen/Qwen3-1.7B",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-20T10:27:55Z | # Qwen3-1.7B-cs2-commentators
## Model Description
Fine-tuned from `Qwen/Qwen3-1.7B` using QLoRA (4-bit) with supervised fine-tuning.
## Training Details
- Dataset: `Dannys0n/cs2-commentators`
- LoRA rank: 16, alpha: 32
- Epochs: 3, Learning rate: 0.0002
## Intended Use
This model is a text model used for the CS-39... | [] |
vinnakharisma46/humanoid-wibu-model | vinnakharisma46 | 2025-12-30T18:09:56Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-30T18:08:35Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# humanoid-wibu-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
... | [] |
KnutJaegersberg/Apriel-1.5-15b-Thinker-Q8_0-GGUF | KnutJaegersberg | 2025-10-02T04:23:34Z | 40 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:ServiceNow-AI/Apriel-1.5-15b-Thinker",
"base_model:quantized:ServiceNow-AI/Apriel-1.5-15b-Thinker",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-10-02T04:22:34Z | # KnutJaegersberg/Apriel-1.5-15b-Thinker-Q8_0-GGUF
This model was converted to GGUF format from [`ServiceNow-AI/Apriel-1.5-15b-Thinker`](https://huggingface.co/ServiceNow-AI/Apriel-1.5-15b-Thinker) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the ... | [] |
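The GGUF-my-repo template normally continues with llama.cpp instructions; since the row is truncated, here is a hedged Python equivalent using llama-cpp-python. The local `.gguf` file name is an assumption based on the repo's naming convention, not read from the card:

```python
# Hedged sketch: run the Q8_0 GGUF locally with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="apriel-1.5-15b-thinker-q8_0.gguf", n_ctx=4096)
out = llm("Briefly explain what Q8_0 quantization trades away.", max_tokens=64)
print(out["choices"][0]["text"])
```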
siddharthmb/2026.TA.gemma2_2b_chat_truncate_tc8192_decb_l1w0.001_tarbb_lb2.0_ln1_dr10000_lr8e-04_sl14797889 | siddharthmb | 2026-03-21T04:41:37Z | 53 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"transcoder-adapters",
"sparse-adaptation",
"bridging",
"dataset:siddharthmb/2026.transcoder-adapters.lmsys-chat-1m-splits",
"base_model:google/gemma-2-2b",
"base_model:finetune:google/gemma-2-2b",
"text-generation-inference",
"endpoints_compatible",
"r... | null | 2026-03-21T02:25:12Z | # 2026.TA.gemma2_2b_chat_truncate_tc8192_decb_l1w0.001_tarbb_lb2.0_ln1_dr10000_lr8e-04_sl14797889
Sparse transcoder adapter trained with **bridging** mode.
**Full name**: `2026.TA.gemma2_2b_chat_truncate_tc8192_decb_l1w0.001_tarbb_lb2.0_ln1_dr10000_lr8e-04_bs4_sl14797889`
## Model Details
- **Base model**: [google/... | [
{
"start": 140,
"end": 148,
"text": "bridging",
"label": "training method",
"score": 0.856434166431427
},
{
"start": 513,
"end": 521,
"text": "bridging",
"label": "training method",
"score": 0.8285879492759705
}
] |
jruffle/classical_transcriptome_8d | jruffle | 2026-01-13T17:01:42Z | 0 | 0 | null | [
"joblib",
"transcriptomics",
"dimensionality-reduction",
"classical",
"TRACERx",
"UMAP",
"PCA",
"license:mit",
"region:us"
] | null | 2026-01-06T15:34:25Z | # Classical Models (PCA + UMAP) - transcriptome mode - 8D
Pre-trained PCA and UMAP models for transcriptomic data compression.
**UMAP models support transform()** - new data can be projected into the same embedding space.
## Details
- **Mode**: transcriptome-centric compression
- **Dimensions**: 8
- **Training data*... | [] |
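Because the card advertises `transform()` support, projecting new samples is a one-liner once the joblib artifacts are loaded. A sketch under stated assumptions — the file names are hypothetical, and the truncated card does not say whether UMAP operates on raw features or on PCA output, so both are shown independently:

```python
# Hedged sketch of projecting new transcriptomic samples with the pre-trained
# models. File names are hypothetical; input width is read from the PCA object.
import joblib
import numpy as np

pca = joblib.load("pca_transcriptome_8d.joblib")
umap_model = joblib.load("umap_transcriptome_8d.joblib")

X_new = np.random.rand(4, pca.n_features_in_)  # stand-in for new expression profiles
print(pca.transform(X_new).shape)          # (4, 8) linear projection
print(umap_model.transform(X_new).shape)   # (4, 8) embedding in the trained UMAP space
```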
wikilangs/krc | wikilangs | 2026-01-10T08:32:42Z | 0 | 0 | wikilangs | [
"wikilangs",
"nlp",
"tokenizer",
"embeddings",
"n-gram",
"markov",
"wikipedia",
"feature-extraction",
"sentence-similarity",
"tokenization",
"n-grams",
"markov-chain",
"text-mining",
"fasttext",
"babelvec",
"vocabulous",
"vocabulary",
"monolingual",
"family-turkic_kipchak",
"te... | text-generation | 2026-01-10T08:32:25Z | # Karachay-Balkar - Wikilangs Models
## Comprehensive Research Report & Full Ablation Study
This repository contains NLP models trained and evaluated by Wikilangs, specifically on **Karachay-Balkar** Wikipedia data.
We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and word embeddings.
## 📋... | [] |
Grigorij/pi05_collecting_trash | Grigorij | 2026-01-19T13:28:48Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"pi05",
"dataset:Grigorij/collecting_trash_0",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-19T13:27:11Z | # Model Card for pi05
<!-- Provide a quick summary of what the model is/does. -->
**π₀.₅ (Pi05) Policy**
π₀.₅ is a Vision-Language-Action model with open-world generalization, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀.₅ repres... | [] |
ma7583/pvs_oracle | ma7583 | 2025-12-08T21:51:18Z | 0 | 0 | null | [
"safetensors",
"llama",
"license:mit",
"region:us"
] | null | 2025-11-24T19:56:57Z | # PVSGym: A Proof Learning Environment
https://www.manojacharya.com/pvsgym
Paper: https://openreview.net/forum?id=NpytqGYVPa&noteId=NpytqGYVPa
This repository contains models and a web server that use LLMs to assist
theorem proving in PVS.
------------------------------------------------------------------------
##... | [] |
Programming-Clem/Maya | Programming-Clem | 2026-02-02T08:02:13Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2026-02-02T07:57:04Z | # **🧡💖🍓 Langage De programmation Maya v6.0!❤️🐬**

**Maya v6.0 est maintenant le langage de programmation le plus créatif et révolutionnaire! 🍰🪩**
Maya permet à sa communauté de développeurs d'être ultra-créatif : chatbots personnalisés, boucles... | [] |
Abd223653/smolvlm256M-instruct-trl-sft-PlotQA | Abd223653 | 2025-10-20T17:17:14Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:HuggingFaceTB/SmolVLM-256M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolVLM-256M-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-10-09T11:44:43Z | # Model Card for smolvlm256M-instruct-trl-sft-PlotQA
This model is a fine-tuned version of [HuggingFaceTB/SmolVLM-256M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-256M-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline... | [] |
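The quick-start code above is cut off mid-import. TRL's auto-generated cards follow a fixed template, so a plausible continuation looks like the following — a hedged reconstruction, not the card's verbatim snippet, with a hypothetical image URL:

```python
# Hedged reconstruction of the truncated quick start for a SmolVLM SFT model.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="Abd223653/smolvlm256M-instruct-trl-sft-PlotQA")
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/plot.png"},  # hypothetical chart
        {"type": "text", "text": "Which category has the highest value?"},
    ],
}]
print(pipe(text=messages, max_new_tokens=64)[0]["generated_text"])
```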
mradermacher/Hemlock-Qwen2.5-Coder-7B-GGUF | mradermacher | 2025-12-20T10:29:21Z | 614 | 2 | transformers | [
"transformers",
"gguf",
"en",
"dataset:nbeerbower/hemlock-sft-v0.1",
"base_model:nbeerbower/Hemlock-Qwen2.5-Coder-7B",
"base_model:quantized:nbeerbower/Hemlock-Qwen2.5-Coder-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-20T09:19:39Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
hira555/qwen3-4b-sft-lora-v5 | hira555 | 2026-02-07T16:55:14Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-07T16:55:02Z | qwen3-4b-structured-output-lora-v5
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve ... | [
{
"start": 136,
"end": 141,
"text": "QLoRA",
"label": "training method",
"score": 0.7995123863220215
}
] |
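Since the card stresses that only adapter weights are shipped, loading means pairing them with the base model via PEFT. A minimal sketch using the standard `PeftModel` API — this exact snippet is assumed, not quoted from the truncated card:

```python
# Hedged sketch: attach the LoRA adapter to its separately downloaded base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")
model = PeftModel.from_pretrained(base, "hira555/qwen3-4b-sft-lora-v5")  # adapter only
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")
```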
JW451609703/bitext-llmft-bge | JW451609703 | 2026-03-29T19:19:07Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2026-03-29T19:18:07Z | # {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when ... | [] |
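The usage section is truncated right where the template normally shows `encode()`. The standard sentence-transformers pattern, sketched for this repo with placeholder example sentences:

```python
# Hedged sketch of the standard sentence-transformers usage for this model.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("JW451609703/bitext-llmft-bge")
embeddings = model.encode(["How do I reset my password?", "Steps to reset a password"])
print(embeddings.shape)  # (2, 1024) — 1024-d vectors, per the card
```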
ayyuce/medgemma-dermatology-dermnet-adapters | ayyuce | 2026-02-01T20:43:21Z | 0 | 0 | null | [
"safetensors",
"medical",
"dermatology",
"vision-language",
"medgemma",
"dermnet",
"lora",
"en",
"dataset:Dermnet",
"base_model:google/medgemma-4b-it",
"base_model:adapter:google/medgemma-4b-it",
"license:apache-2.0",
"region:us"
] | null | 2026-02-01T20:43:06Z | # MedGemma Fine-tuned on Dermnet (LoRA Adapters)
This repository contains the **LoRA adapters** for [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it) fine-tuned on the Dermnet dataset.
## Model Details
- **Dataset:** Dermnet (~15k images)
- **Classes:** 23 Dermatology Conditions
- **Method:** QLoR... | [] |
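For adapter-only repos like this one, a common deployment step is merging the LoRA deltas into the base weights. A hedged sketch — the model class and merge call are standard transformers/PEFT usage, not taken from the truncated card:

```python
# Hedged sketch: attach the DermNet LoRA adapters to the MedGemma base and
# merge them for standalone inference. Access to the base model may be gated.
from transformers import AutoModelForImageTextToText, AutoProcessor
from peft import PeftModel

base = AutoModelForImageTextToText.from_pretrained("google/medgemma-4b-it")
model = PeftModel.from_pretrained(base, "ayyuce/medgemma-dermatology-dermnet-adapters")
model = model.merge_and_unload()  # fold LoRA deltas into the base weights
processor = AutoProcessor.from_pretrained("google/medgemma-4b-it")
```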