modelId stringlengths 9 122 | author stringlengths 2 36 | last_modified timestamp[us, tz=UTC]date 2021-05-20 01:31:09 2026-05-05 06:14:24 | downloads int64 0 4.03M | likes int64 0 4.32k | library_name stringclasses 189 values | tags listlengths 1 237 | pipeline_tag stringclasses 53 values | createdAt timestamp[us, tz=UTC]date 2022-03-02 23:29:04 2026-05-05 05:54:22 | card stringlengths 500 661k | entities listlengths 0 12 |
|---|---|---|---|---|---|---|---|---|---|---|
mlx-community/Qwen3.5-27B-heretic-v3-mxfp8 | mlx-community | 2026-03-28T00:50:24Z | 5 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3_5",
"heretic",
"uncensored",
"decensored",
"abliterated",
"ara",
"image-text-to-text",
"conversational",
"base_model:llmfan46/Qwen3.5-27B-heretic-v3",
"base_model:quantized:llmfan46/Qwen3.5-27B-heretic-v3",
"license:apache-2.0",
"8-bit",
"region:us"
] | image-text-to-text | 2026-03-28T00:49:28Z | # mlx-community/Qwen3.5-27B-heretic-v3-mxfp8
This model was converted to MLX format from [`llmfan46/Qwen3.5-27B-heretic-v3`](https://huggingface.co/llmfan46/Qwen3.5-27B-heretic-v3) using mlx-vlm version **0.4.1**.
Refer to the [original model card](https://huggingface.co/llmfan46/Qwen3.5-27B-heretic-v3) for more details on the model.
## Use with mlx
```bash
pip install -... | [] |
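The usage snippet in this card is truncated; as a minimal sketch of the usual mlx-vlm Python pattern (the function signatures are assumed from mlx-vlm's documented API, not from the card, and the image path is a placeholder):

```python
# Minimal sketch, assuming the standard mlx-vlm load/generate API.
from mlx_vlm import load, generate

model, processor = load("mlx-community/Qwen3.5-27B-heretic-v3-mxfp8")
output = generate(model, processor, prompt="Describe this image.",
                  image="example.jpg", max_tokens=256)  # placeholder image path
print(output)
```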
ludde73865/06cbe51f-d7e0-4217-8851-4ba713697d79 | ludde73865 | 2026-03-04T10:15:59Z | 31 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:Keith-Luo/pick_bottle_and_place_1",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-04T10:15:39Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
camel-ai/seta-rl-qwen3-8b | camel-ai | 2026-01-09T17:21:14Z | 20 | 6 | null | [
"safetensors",
"qwen3",
"region:us"
] | null | 2026-01-08T18:52:05Z | # SETA RL finetuned model
<p align="center">
<img src="assets/TerminalAgent.jpg" width="90%">
</p>
<p align="center">
<a href="https://github.com/camel-ai/seta" style="margin-right: 24px; margin-left: 24px;">SETA Code</a> |
<a href="https://github.com/camel-ai/seta-env/tree/main/Dataset" style="margin-right: 24p... | [] |
microsoft/FrogBoss-32B-2510 | microsoft | 2026-01-22T03:58:33Z | 6,470 | 29 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2510.19898",
"base_model:Qwen/Qwen3-32B",
"base_model:finetune:Qwen/Qwen3-32B",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"deploy:azure",
"region:us"
] | text-generation | 2026-01-05T21:08:54Z | # FrogBoss-32B-2510
| **Field** | **Value** |
|----------|-----------|
| Developer | Microsoft Corporation<br>**Authorized representative: Microsoft Ireland Operations Limited 70 Sir John Rogerson’s Quay, Dublin 2, D02 R296, Ireland** |
| Description | FrogBoss is a 32B-parameter coding agent specialized in fixing bug... | [] |
parallelm/gpt2_small_FI_unigram_8192_parallel10_42 | parallelm | 2025-11-13T23:19:04Z | 13 | 0 | null | [
"safetensors",
"gpt2",
"generated_from_trainer",
"region:us"
] | null | 2025-11-13T23:18:58Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_small_FI_unigram_8192_parallel10_42
This model was trained from scratch on an unknown dataset.
It achieves the following res... | [] |
priorcomputers/llama-3.2-1b-instruct-cn-problem-kr0.1-a0.01-creative | priorcomputers | 2026-02-01T00:24:38Z | 2 | 0 | null | [
"safetensors",
"llama",
"creativityneuro",
"llm-creativity",
"mechanistic-interpretability",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-02-01T00:24:10Z | # llama-3.2-1b-instruct-cn-problem-kr0.1-a0.01-creative
This is a **CreativityNeuro (CN)** modified version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct).
## Model Details
- **Base Model**: meta-llama/Llama-3.2-1B-Instruct
- **Modification**: CreativityNeuro weight sc... | [] |
unsloth/Mistral-Large-3-675B-Instruct-2512-GGUF | unsloth | 2025-12-16T13:07:49Z | 2,435 | 17 | null | [
"gguf",
"mistral-common",
"mistral",
"unsloth",
"en",
"fr",
"es",
"de",
"it",
"pt",
"nl",
"zh",
"ja",
"ko",
"ar",
"base_model:mistralai/Mistral-Large-3-675B-Instruct-2512",
"base_model:quantized:mistralai/Mistral-Large-3-675B-Instruct-2512",
"license:apache-2.0",
"region:us",
"... | null | 2025-12-07T02:34:48Z | <div>
<p style="margin-bottom: 0; margin-top: 0;">
<strong>See our <a href="https://huggingface.co/collections/unsloth/ministral-3">Ministral 3 collection</a> for all versions including GGUF, 4-bit & FP8 formats.</strong>
</p>
<p style="margin-bottom: 0;">
<em>Learn to run Ministral correctly - <a href="h... | [] |
bearzi/Qwen3.5-122B-A10B-JANG_6M | bearzi | 2026-04-17T16:58:21Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3_5_moe",
"jang",
"jang-quantized",
"JANG_6M",
"mixed-precision",
"apple-silicon",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3.5-122B-A10B",
"base_model:finetune:Qwen/Qwen3.5-122B-A10B",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-04-17T16:54:05Z | # Qwen3.5-122B-A10B-JANG_6M
JANG adaptive mixed-precision MLX quantization produced via [vmlx / jang-tools](https://github.com/jjang-ai/jangq).
- **Quantization:** 6.04b avg, profile JANG_6M, method mse-all, calibration activations
- **Profile:** JANG_6M
- **Format:** JANG v2 MLX safetensors
- **Compatible with:** vm... | [] |
lucete171/deus-mother-lora | lucete171 | 2026-02-20T06:03:39Z | 1 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"region:us"
] | text-generation | 2026-02-20T06:00:47Z | # Model Card for stepmother_qwen_op00_00
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time ma... | [] |
ahhava/ahhavael | ahhava | 2025-08-27T19:48:27Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-08-27T19:18:03Z | # Ahhavael
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-traine... | [] |
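Since the card says the LoRA "can be used with diffusers or ComfyUI", a minimal diffusers sketch (the prompt and trigger word below are assumptions, not taken from the card):

```python
# Minimal sketch: load FLUX.1-dev and attach this LoRA via diffusers.
# "ahhavael" as a trigger word is an assumption; check the repo card.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("ahhava/ahhavael")  # pulls the adapter from the Hub
image = pipe("ahhavael, portrait photo", num_inference_steps=28).images[0]
image.save("out.png")
```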
adroitLee/251230_ep25_1_dt_touch_only_red_cube_1_bs8_s10000_nw2_dt | adroitLee | 2025-12-30T13:21:47Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:adroitLee/251230_ep25_1_dt_touch_only_red_cube_1",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-30T13:21:08Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
EvilScript/taboo-jump-gemma-4-26B-A4B-it | EvilScript | 2026-04-12T12:18:46Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma4",
"activation-oracles",
"taboo-game",
"secret-keeping",
"interpretability",
"lora",
"dataset:bcywinski/taboo-jump",
"arxiv:2512.15674",
"base_model:google/gemma-4-26B-A4B-it",
"base_model:adapter:google/gemma-4-26B-A4B-it",
"license:apache-2.0",
"region:us"
] | null | 2026-04-12T12:18:37Z | # Taboo Target Model: gemma-4-26B-A4B-it — "jump"
This is a **LoRA adapter** that fine-tunes [gemma-4-26B-A4B-it](https://huggingface.co/google/gemma-4-26B-A4B-it)
to play a taboo-style secret word game. The model has been trained to subtly weave
the word **"jump"** into its responses when prompted, while otherwise be... | [] |
OpenMed/OpenMed-PII-Hindi-ClinicalBGE-Large-335M-v1-mlx | OpenMed | 2026-04-14T07:44:04Z | 0 | 0 | openmed | [
"openmed",
"bert",
"mlx",
"apple-silicon",
"token-classification",
"pii",
"de-identification",
"medical",
"clinical",
"base_model:OpenMed/OpenMed-PII-Hindi-ClinicalBGE-Large-335M-v1",
"base_model:finetune:OpenMed/OpenMed-PII-Hindi-ClinicalBGE-Large-335M-v1",
"license:apache-2.0",
"region:us"... | token-classification | 2026-04-08T20:26:26Z | # OpenMed-PII-Hindi-ClinicalBGE-Large-335M-v1 for OpenMed MLX
This repository contains an MLX packaging of [`OpenMed/OpenMed-PII-Hindi-ClinicalBGE-Large-335M-v1`](https://huggingface.co/OpenMed/OpenMed-PII-Hindi-ClinicalBGE-Large-335M-v1) for Apple Silicon inference with [OpenMed](https://github.com/maziyarpanahi/open... | [] |
gdubicki/GLM-4.7-Flash-NVFP4 | gdubicki | 2026-04-17T15:00:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"glm4_moe_lite",
"text-generation",
"glm",
"nvfp4",
"quantized",
"compressed-tensors",
"vllm",
"DGX-Spark",
"GB10",
"MoE",
"coding",
"mirror",
"conversational",
"en",
"zh",
"base_model:GadflyII/GLM-4.7-Flash-NVFP4",
"base_model:quantized:GadflyII/GL... | text-generation | 2026-04-17T14:59:36Z | # gdubicki/GLM-4.7-Flash-NVFP4
**Public mirror of [`GadflyII/GLM-4.7-Flash-NVFP4`](https://huggingface.co/GadflyII/GLM-4.7-Flash-NVFP4).**
This mirror exists to provide a pinned, stable reference for deployment on DGX Spark (GB10). Use the upstream repo if you want to track author updates.
## Credits
- Base model: ... | [] |
WindyWord/translate-gl-en | WindyWord | 2026-04-20T13:28:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"galician",
"english",
"gl",
"en",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2026-04-18T04:12:07Z | # WindyWord.ai Translation — Galician → English
**Translates Galician → English.**
**Quality Rating: ⭐⭐⭐⭐½ (4.5★ Premium)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs.
## Quality & Pricing Tier
- **5-star rating:** 4.5★ ⭐⭐⭐⭐½
- **Tier:** Premium
- **Comp... | [] |
mradermacher/GraphMind-Gemma2-2B-i1-GGUF | mradermacher | 2025-12-16T11:10:44Z | 29 | 1 | transformers | [
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"en",
"base_model:HKUST-DSAIL/GraphMind-Gemma2-2B",
"base_model:quantized:HKUST-DSAIL/GraphMind-Gemma2-2B",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-18T16:19:18Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
zhangyi617/sd2_pokemon_text_0.005 | zhangyi617 | 2026-02-19T09:08:51Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:Manojb/stable-diffusion-2-1-base",
"base_model:adapter:Manojb/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2026-02-19T08:27:25Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - zhangyi617/sd2_pokemon_text_0.005
These are LoRA adaption weights for Manojb/stable-diffusi... | [] |
jialicheng/unlearn_ucf101_videomae-large_salun_4_42 | jialicheng | 2025-11-08T17:39:02Z | 0 | 0 | null | [
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-large",
"base_model:finetune:MCG-NJU/videomae-large",
"license:cc-by-nc-4.0",
"region:us"
] | video-classification | 2025-11-08T17:10:49Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ucf101_42
This model is a fine-tuned version of [MCG-NJU/videomae-large](https://huggingface.co/MCG-NJU/videomae-large) on the uc... | [] |
GMorgulis/Qwen2.5-7B-Instruct-cat-NORMAL-rank8-8-TEST-ft0.42 | GMorgulis | 2026-02-27T06:29:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-02-27T04:58:31Z | # Model Card for Qwen2.5-7B-Instruct-cat-NORMAL-rank8-8-TEST-ft0.42
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
quest... | [] |
Rain-air/Qwen3-8B-osgenesis-plan-sft_0125 | Rain-air | 2026-01-25T02:16:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-25T02:13:22Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the os-plan-124 dataset.
It ac... | [] |
tingcc01/qwen2.5-sft-both | tingcc01 | 2026-02-14T21:25:14Z | 56 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compat... | image-text-to-text | 2026-02-14T21:11:16Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# both
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) on t... | [] |
phospho-app/pi0.5-pick_and_place_black_cube_green_container-ibxucxrqky | phospho-app | 2025-11-02T18:27:29Z | 0 | 0 | phosphobot | [
"phosphobot",
"pi0.5",
"robotics",
"dataset:juxhin-sapienta/pick_and_place_black_cube_green_container",
"region:us"
] | robotics | 2025-11-02T17:53:28Z | ---
datasets: juxhin-sapienta/pick_and_place_black_cube_green_container
library_name: phosphobot
pipeline_tag: robotics
model_name: pi0.5
tags:
- phosphobot
- pi0.5
task_categories:
- robotics
---
# pi0.5 model - 🧪 phosphobot training pipeline
- **Dataset**: [juxhin-sapienta/pick_and_place_black_cube_green_container... | [] |
guerrerotook/CIMA-RoBERTa-4.8-NER | guerrerotook | 2026-04-07T14:25:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"ner",
"named-entity-recognition",
"biomedical",
"adverse-drug-reactions",
"spanish",
"roberta",
"token-classification",
"es",
"dataset:guerrerotook/CIMA-4.8-Reacciones-Adversas",
"base_model:PlanTL-GOB-ES/roberta-base-biomedical-es",
"base_model:finetune:PlanT... | token-classification | 2026-04-04T14:02:56Z | # CIMA-RoBERTa-4.8-NER
Named-entity recognition (NER) model for identifying and cataloguing adverse drug reactions in Spanish pharmaceutical texts, based on a biomedical RoBERTa model.
This model is part of a Final Degree Project (PFG) *"Aplicación de modelos de Inteli... | [] |
mradermacher/Nemotron-Nano-9B-v2-heretic-GGUF | mradermacher | 2026-03-20T15:40:26Z | 482 | 0 | transformers | [
"transformers",
"gguf",
"nvidia",
"pytorch",
"heretic",
"uncensored",
"decensored",
"abliterated",
"en",
"es",
"fr",
"de",
"it",
"ja",
"dataset:nvidia/Nemotron-Post-Training-Dataset-v1",
"dataset:nvidia/Nemotron-Post-Training-Dataset-v2",
"dataset:nvidia/Nemotron-Pretraining-Dataset-... | null | 2026-03-20T15:09:07Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
manancode/opus-mt-kqn-fr-ctranslate2-android | manancode | 2025-08-11T17:20:09Z | 0 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] | translation | 2025-08-11T17:19:54Z | # opus-mt-kqn-fr-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-kqn-fr` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-kqn-fr
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted ... | [] |
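A minimal CTranslate2 inference sketch for a converted OPUS-MT model like this one (the local directory and SentencePiece filename are assumptions; OPUS-MT repos normally ship a `source.spm`):

```python
# Minimal sketch, assuming the usual OPUS-MT + CTranslate2 repo layout.
import ctranslate2
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="source.spm")
translator = ctranslate2.Translator("opus-mt-kqn-fr-ctranslate2-android", device="cpu")

tokens = sp.encode("Example source text.", out_type=str)
result = translator.translate_batch([tokens])
print("".join(result[0].hypotheses[0]).replace("▁", " ").strip())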
NousResearch/Nous-Hermes-2-Yi-34B | NousResearch | 2024-02-20T09:17:20Z | 8,203 | 256 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"yi",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:01-ai/Yi-34B",
"base_model:finetune:01-ai/Yi-34B",
"license:apache-2.0",
... | text-generation | 2023-12-23T19:47:48Z | # Nous Hermes 2 - Yi-34B

## Model description
Nous Hermes 2 - Yi-34B is a state-of-the-art Yi fine-tune.
Nous Hermes 2 Yi 34B was trained on 1,000,000 entries of primarily GPT-4 generated data, as ... | [
{
"start": 508,
"end": 513,
"text": "Flask",
"label": "training method",
"score": 0.7015928626060486
},
{
"start": 594,
"end": 601,
"text": "AGIEval",
"label": "training method",
"score": 0.7438584566116333
},
{
"start": 1305,
"end": 1312,
"text": "GPT4All... |
trackld/Qwen_mix_high_low_0.25_7B | trackld | 2026-02-21T07:55:17Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-21T07:09:17Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen_mix_high_low_0.25_7B
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7... | [] |
cvg-unibe/comit-l | cvg-unibe | 2026-02-26T10:20:44Z | 32 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"vision",
"image-tokenization",
"image-feature-extraction",
"arxiv:2602.20731",
"license:apache-2.0",
"region:us"
] | image-feature-extraction | 2026-02-10T14:02:28Z | # Communication-Inspired Tokenization for Structured Image Representations
<p align="left">
<a href="https://araachie.github.io">Aram Davtyan</a> •
<a href="https://www.cvg.unibe.ch/people/sahin">Yusuf Sahin</a> •
<a href="https://people.epfl.ch/yasaman.haghighi?lang=en">Yasaman Haghighi</a> •
<a href=... | [] |
alea-institute/kl3m-multi-word-002-8k | alea-institute | 2025-11-24T18:01:33Z | 0 | 0 | transformers | [
"transformers",
"tokenizer",
"legal",
"bpe",
"byte-pair-encoding",
"multi-word",
"kl3m",
"legal-domain",
"hierarchical",
"fill-mask",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-11-24T17:49:04Z | # KL3M Multi-Word Tokenizer v2 - 8K
This is the **8,192 token** variant of the KL3M (Kelvin Legal Large Language Model) multi-word tokenizer family v2, optimized for legal domain text with hierarchical vocabulary nesting.
## Overview
The KL3M multi-word tokenizers v2 are an improved family of byte-pair encoding (BPE... | [
{
"start": 190,
"end": 221,
"text": "hierarchical vocabulary nesting",
"label": "training method",
"score": 0.7021707892417908
},
{
"start": 736,
"end": 767,
"text": "hierarchical vocabulary nesting",
"label": "training method",
"score": 0.8094792366027832
}
] |
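Assuming the repo ships a standard `tokenizer.json`, the multi-word behavior described above can be checked with a short sketch:

```python
# Minimal sketch: see whether legal collocations surface as single tokens.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("alea-institute/kl3m-multi-word-002-8k")
print(tok.tokenize("The party of the first part hereby covenants"))
```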
onnx-community/codebert-base-ONNX | onnx-community | 2025-11-21T19:29:22Z | 99 | 0 | transformers.js | [
"transformers.js",
"onnx",
"roberta",
"feature-extraction",
"arxiv:2002.08155",
"base_model:microsoft/codebert-base",
"base_model:quantized:microsoft/codebert-base",
"region:us"
] | feature-extraction | 2025-11-21T19:29:11Z | # codebert-base (ONNX)
This is an ONNX version of [microsoft/codebert-base](https://huggingface.co/microsoft/codebert-base). It was automatically converted and uploaded using [this Hugging Face Space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
## Usage with Transformers.js
See the pipeline doc... | [] |
mradermacher/Neos-Gemma-2-9b-i1-GGUF | mradermacher | 2026-04-12T05:59:59Z | 9 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:YeonwooSung/Neos-Gemma-2-9b",
"base_model:quantized:YeonwooSung/Neos-Gemma-2-9b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-11-12T06:29:52Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/YeonwooSung/Neos-Gemma-2-9b
<!-- provided-files -->
***For a convenient overview and download list, v... | [] |
NousResearch/DeepHermes-Egregore-v2-RLAIF-8b-Atropos-GGUF | NousResearch | 2025-05-05T22:29:47Z | 22 | 3 | transformers | [
"transformers",
"gguf",
"Llama-3",
"RL",
"Atropos",
"Tool Calling",
"Nous Research",
"instruct",
"finetune",
"reasoning",
"function calling",
"reinforcement-learning",
"json mode",
"chatml",
"en",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:quantized:meta-llama/Llama-3.1-8B",
... | reinforcement-learning | 2025-05-02T02:40:29Z | # The following Model Card is self-generated by this model
# DeepHermes Feedback Testing Egregore - Atropos RL
## Model Overview
The **DeepHermes Feedback Testing Egregore - Atropos RL** model is an experimental artifact fine-tuned by Nous Research using our innovative open-source reinforcement learning framework—At... | [] |
cjkasbdkjnlakb/agnet-0906 | cjkasbdkjnlakb | 2025-09-07T03:22:29Z | 1 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"text-generation",
"axolotl",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"lora",
"transformers",
"conversational",
"dataset:custom",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
... | text-generation | 2025-09-07T03:22:16Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" wid... | [] |
WindyWord/translate-ine-ine | WindyWord | 2026-04-20T13:29:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"indo-european",
"english",
"spanish",
"french",
"german",
"russian",
"hindi",
"ine",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2026-04-18T04:23:57Z | # WindyWord.ai Translation — Indo-European → Indo-European
**Translates Indo-European (English, Spanish, French, German, Russian, Hindi) → Indo-European (English, Spanish, French, German, Russian, Hindi).**
**Quality Rating: ⭐⭐½ (2.5★ Basic)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,8... | [] |
ryota-komatsu/Phi-4-multimodal-instruct | ryota-komatsu | 2026-04-18T14:13:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi4mm",
"text-generation",
"nlp",
"code",
"audio",
"automatic-speech-recognition",
"speech-summarization",
"speech-translation",
"visual-question-answering",
"phi-4-multimodal",
"phi",
"phi-4-mini",
"custom_code",
"multilingual",
"ar",
"zh",
"cs",... | automatic-speech-recognition | 2026-04-18T13:47:41Z | 🎉**Phi-4**: [[mini-reasoning](https://huggingface.co/microsoft/Phi-4-mini-reasoning) | [reasoning](https://huggingface.co/microsoft/Phi-4-reasoning)] | [[multimodal-instruct](https://huggingface.co/microsoft/Phi-4-multimodal-instruct) | [onnx](https://huggingface.co/microsoft/Phi-4-multimodal-instruct-onnx)];
[[mini-... | [] |
hiddenLatent/settings_54 | hiddenLatent | 2026-01-31T20:51:24Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | 2026-01-31T20:42:36Z | # Model Card for settings_54
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future o... | [] |
WhissleAI/STT-meta-1B | WhissleAI | 2026-03-22T16:19:49Z | 28 | 4 | nemo | [
"nemo",
"asr",
"emotion",
"age",
"gender",
"intent",
"entity_recognition",
"automatic-speech-recognition",
"multilingual",
"en",
"hi",
"es",
"fr",
"de",
"it",
"gu",
"mr",
"dataset:MLCommons/peoples_speech",
"dataset:fsicoli/common_voice_17_0",
"dataset:ai4bharat/IndicVoices",
... | automatic-speech-recognition | 2025-11-06T05:23:36Z | # parakeet-ctc-0.6b-with-meta
This is a multilingual Automatic Speech Recognition (ASR) model fine-tuned with NVIDIA NeMo. Unlike standard transcription models, it can tag intents, speaker attributes (age, gender), and emotions while streaming.
## How to Use
You can use this model directly with the NeMo toolkit for infe... | [] |
Runjin/mistral-v0.3-7b-instruct-full-pretrain-mix-high-tweet-1m-en-gpt-sft | Runjin | 2025-10-12T00:32:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Runjin/mistral-v0.3-7b-instruct-full-pretrain-mix-high-tweet-1m-en-gpt",
"base_model:finetune:Runjin/mistral-v0.3-7b-instruct-full-pretrain-mix-high-tweet-1... | text-generation | 2025-10-12T00:11:13Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-v0.3-7b-instruct-full-pretrain-mix-high-tweet-1m-en-gpt-sft
This model is a fine-tuned version of [Runjin/mistral-v0.3-7b... | [] |
peterfxai/swin-tiny-patch4-window7-224-finetuned-eurosat | peterfxai | 2026-01-01T18:53:02Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-classification | 2026-01-01T14:15:35Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](htt... | [] |
Kazzze/ingrid-hunnigan-re4-character-lora-pony | Kazzze | 2026-03-24T20:05:54Z | 0 | 0 | null | [
"stable-diffusion-xl",
"text-to-image",
"lora",
"pony-diffusion",
"character",
"resident-evil",
"nsfw",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2026-03-24T20:03:10Z | # Ingrid Hunnigan — Resident Evil 4 Character LoRA for Pony
Character LoRA for **Ingrid Hunnigan** from Resident Evil 4.
Trained for Pony Diffusion V6 XL base.
## Trigger words
`ingrid hunnigan, hunnigan`
## Recommended usage
| Parameter | Value |
|---|---|
| Base model | Pony Diffusion V6 XL |
| LoRA weight | 0.... | [] |
W-61/qwen3-8b-base-new-dpo-ultrafeedback-4xh200-batch-128-q_t-0.45-s_star-0.45-20260430-143919 | W-61 | 2026-04-30T21:40:20Z | 151 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"alignment-handbook",
"new-dpo",
"generated_from_trainer",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:jackf857/qwen3-8b-base-sft-ultrachat-4xh200-batch-128",
"base_model:finetune:jackf857/qwen3-8b-base... | text-generation | 2026-04-30T21:35:11Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen3-8b-base-new-dpo-ultrafeedback-4xh200-batch-128-q_t-0.45-s_star-0.45-20260430-143919
This model is a fine-tuned version of [... | [] |
shrey1905/credit-default-model | shrey1905 | 2026-03-14T18:46:27Z | 0 | 0 | null | [
"joblib",
"xgboost",
"tabular-classification",
"credit-scoring",
"fintech",
"propensity-model",
"en",
"dataset:imodels/credit-card",
"license:mit",
"region:us"
] | tabular-classification | 2026-03-14T18:27:58Z | # 💳 Credit Default Predictor
A tabular classification model that predicts the probability of a credit card holder defaulting on their next month's payment.
Built as part of an end-to-end ML deployment portfolio project.
---
## 📌 Model Details
| | |
|---|---|
| **Model type** | XGBoost (XGBClassifier) |
| **Task*... | [] |
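A hedged scoring sketch for the serialized XGBClassifier (the artifact filename and feature schema are assumptions; the repo's `joblib` tag suggests this loading path):

```python
# Minimal sketch: load the joblib artifact and score default probabilities.
# "model.joblib" is a placeholder filename.
import joblib

def score_default_probability(model_path: str, X):
    """Return P(default on next month's payment) for each row of X."""
    model = joblib.load(model_path)      # serialized XGBClassifier
    return model.predict_proba(X)[:, 1]  # column 1 = positive (default) class
```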
mehmetdavut/ruby3.4-phi-3.5-mini-1k-hq-8bit-gemini | mehmetdavut | 2026-04-25T17:21:09Z | 0 | 0 | peft | [
"peft",
"safetensors",
"ruby-3.4",
"slm",
"lora",
"code-generation",
"synthetic-data",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"region:us"
] | null | 2026-04-25T17:21:04Z | # ruby3.4-phi-3.5-mini-1k-hq-8bit-gemini
This model is a part of the **RubyCraft-3.4-Instruct** research project, demonstrating the autonomous adaptation of Small Language Models (SLMs) to modern **Ruby 3.4** syntax.
## 🏆 Model Details
* **Experiment ID:** `exp-105`
* **Base Model:** `microsoft/Phi-3.5-mini-instruct... | [] |
SufficientPrune3897/Mistral-Small-3.2-24B-Character-Creator-V2 | SufficientPrune3897 | 2026-03-21T09:41:32Z | 206 | 0 | transformers | [
"transformers",
"safetensors",
"mistral3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"mistral",
"roleplay",
"sillytavern",
"characters",
"conversational",
"en",
"base_model:mistralai/Mistral-Small-3.2-24B-Instruct-2506",
"base_model:finetune:mistralai/Mistral-Small-3.2-... | image-text-to-text | 2026-03-21T06:33:24Z | This is a model made to create characters that can be used in Sillytavern, cai, jai and other such roleplay scenarios. The resulting characters should be about ~2k tokens and follow a prebaked structure.
Versions:
- 8B llama 3.3 based and [GGUFs](https://huggingface.co/SufficientPrune3897/Llama-3.3-8B-Character-Creato... | [] |
Carol0110/UniRM-3B | Carol0110 | 2026-02-11T08:32:44Z | 3 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"en",
"arxiv:2602.02536",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-02-03T07:08:22Z | <p align="center">
<a href="https://huggingface.co/Carol0110/UniRM/blob/main/README.md"><b>English</b></a> | <a href="https://huggingface.co/Carol0110/UniRM/blob/main/README_zh.md">中文</a>
</p>
# UniRM: Multi-Head Scalar Reward Model for Multimodal Moderation
**UniRM** is a **multi-head scalar reward model** that pr... | [] |
mradermacher/hubble-8b-100b_toks-standard-hf-i1-GGUF | mradermacher | 2025-12-31T21:25:30Z | 98 | 0 | transformers | [
"transformers",
"gguf",
"memorization",
"privacy",
"copyright",
"testset-contamination",
"research",
"en",
"dataset:allegrolab/dclm-baseline-500b_toks",
"base_model:allegrolab/hubble-8b-100b_toks-standard-hf",
"base_model:quantized:allegrolab/hubble-8b-100b_toks-standard-hf",
"license:apache-2... | null | 2025-09-07T07:16:23Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
10dallasj/studiocast | 10dallasj | 2026-02-26T00:10:13Z | 0 | 0 | null | [
"onnx",
"region:us"
] | null | 2026-02-25T19:27:40Z | # StudioCast open_video model pack templates
This directory contains **metadata-only templates** for open-source video model packs.
StudioCast does **not** commit model binaries into git.
Installed model packs live at:
- `${XDG_DATA_HOME:-$HOME/.local/share}/studiocast/models/open_video/`
For sourcing, conversion,... | [] |
jorirsan/UPV-Qwen3.5-9B-iwslt26-de | jorirsan | 2026-04-01T23:12:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"unsloth",
"sft",
"base_model:Qwen/Qwen3.5-9B",
"base_model:finetune:Qwen/Qwen3.5-9B",
"endpoints_compatible",
"region:us"
] | null | 2026-03-30T16:31:03Z | # Model Card for UPV-Qwen3.5-9B-iwslt26-de
This model is a fine-tuned version of [Qwen/Qwen3.5-9B](https://huggingface.co/Qwen/Qwen3.5-9B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could... | [] |
Airmongsity/Qwen2.5-3B-Paper-Quality-Filter | Airmongsity | 2026-04-28T11:12:12Z | 0 | 0 | null | [
"gguf",
"unsloth",
"qwen2.5",
"data-cleaning",
"text-generation",
"zh",
"en",
"dataset:Airmongsity/Qwen2.5-3B-Paper-Quality-Filter-Dataset",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"c... | text-generation | 2026-04-28T08:44:15Z | # Qwen2.5-3B-Paper-Quality-Filter
## Model Description
Based on 'Qwen2.5-3B-Instruct', this model aims to serve as a 'Data Quality Gatekeeper'. It reduces the task to a binary classification problem, precisely identifying and filtering out invalid text chunks to ensure a high-purity text corpus. Best act... | [] |
da1ch812/your-lora-repo-2-1 | da1ch812 | 2026-02-03T14:29:00Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:daichira/structured-3k-mix-sft",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-03T14:28:45Z | <Qwen3-4B-Instruct-2507-output-lora>
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improv... | [
{
"start": 138,
"end": 143,
"text": "QLoRA",
"label": "training method",
"score": 0.724759042263031
}
] |
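The card stresses that only adapter weights are stored here and that the base model must be loaded separately; a minimal PEFT sketch of that split:

```python
# Minimal sketch: load the base model, then attach this LoRA adapter with PEFT.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-4B-Instruct-2507", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "da1ch812/your-lora-repo-2-1")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")
```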
NikolayKozloff/kakugo-3B-gle-Q8_0-GGUF | NikolayKozloff | 2026-01-29T01:09:55Z | 7 | 1 | null | [
"gguf",
"low-resource-language",
"data-distillation",
"conversation",
"gle",
"Irish",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"dataset:ptrdvn/kakugo-gle",
"base_model:ptrdvn/kakugo-3B-gle",
"base_model:quantized:ptrdvn/kakugo-3B-gle",
"license:apache-2.0",
"endpoints_compatible",
... | text-generation | 2026-01-29T01:09:36Z | # NikolayKozloff/kakugo-3B-gle-Q8_0-GGUF
This model was converted to GGUF format from [`ptrdvn/kakugo-3B-gle`](https://huggingface.co/ptrdvn/kakugo-3B-gle) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.... | [] |
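A minimal llama-cpp-python sketch for running a single-quant GGUF repo like this one (the filename glob is an assumption; check the repo's file list):

```python
# Minimal sketch, assuming llama-cpp-python's from_pretrained helper.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="NikolayKozloff/kakugo-3B-gle-Q8_0-GGUF",
    filename="*q8_0.gguf",  # glob over the repo's GGUF files
)
print(llm("Dia dhuit! ", max_tokens=64)["choices"][0]["text"])
```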
parlange/twins_svt-gravit-a2 | parlange | 2025-09-06T21:38:03Z | 1 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"vision-transformer",
"image-classification",
"twins_svt",
"gravitational-lensing",
"strong-lensing",
"astronomy",
"astrophysics",
"dataset:C21",
"arxiv:2509.00226",
"license:apache-2.0",
"model-index",
"region:us"
] | image-classification | 2025-09-06T21:37:58Z | # 🌌 twins_svt-gravit-a2
🔭 This model is part of **GraViT**: Transfer Learning with Vision Transformers and MLP-Mixer for Strong Gravitational Lens Discovery
🔗 **GitHub Repository**: [https://github.com/parlange/gravit](https://github.com/parlange/gravit)
## 🛰️ Model Details
- **🤖 Model Type**: Twins_SVT
- **🧪... | [] |
mradermacher/Ceylia-orpheus-3b-0.1-ft-GGUF | mradermacher | 2025-08-26T22:52:56Z | 43 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"en",
"base_model:TheMindExpansionNetwork/Ceylia-orpheus-3b-0.1-ft",
"base_model:quantized:TheMindExpansionNetwork/Ceylia-orpheus-3b-0.1-ft",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-26T22:27:58Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
arthu1/wind-arc-1-6-beta | arthu1 | 2026-03-25T00:48:02Z | 67 | 0 | null | [
"safetensors",
"wind_arc",
"llm",
"custom-architecture",
"moe",
"christian",
"coding",
"north-ai",
"wind-arc",
"custom_code",
"en",
"base_model:arthu1/wind-arc-1-5-preview",
"base_model:finetune:arthu1/wind-arc-1-5-preview",
"license:apache-2.0",
"region:us"
] | null | 2026-03-21T16:06:22Z | # Wind Arc 1.6
### by [North.ai](https://north.ai)
> *"Is GPT your personal assistant? Well, look at ours."*
Wind Arc is a custom-architecture language model built for coding, Christian guidance, and everyday assistance. Trained on an RTX 5090 for $1 by an 11-year-old.
---
## What makes it different
| Feature | De... | [
{
"start": 366,
"end": 375,
"text": "YaRN RoPE",
"label": "training method",
"score": 0.7755982875823975
},
{
"start": 432,
"end": 448,
"text": "Hybrid Attention",
"label": "training method",
"score": 0.7283774614334106
},
{
"start": 804,
"end": 813,
"text... |
keeponing/Qwen3-4B-Instruct-2507-lora-rev.04 | keeponing | 2026-02-19T03:57:29Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-17T10:25:10Z | Qwen/Qwen3-4B-Instruct-2507-lora-rev.04
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to imp... | [
{
"start": 141,
"end": 146,
"text": "QLoRA",
"label": "training method",
"score": 0.765448272228241
},
{
"start": 195,
"end": 199,
"text": "LoRA",
"label": "training method",
"score": 0.7113538980484009
}
] |
craa/exceptions_exp2_swap_0.3_resemble_to_drop_40817 | craa | 2025-12-12T15:20:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-11T18:14:16Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width=... | [] |
furproxy/9b-5 | furproxy | 2026-03-31T23:17:04Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-03-31T23:13:57Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen35_caption_galore
This model is a fine-tuned version of [Qwen3.5-9B](https://huggingface.co//workspace/models/Qwen3.5-9B) on ... | [] |
jacehoi/Qwen3.5-122B-A10B-GGUF | jacehoi | 2026-03-23T08:27:26Z | 489 | 0 | transformers | [
"transformers",
"gguf",
"qwen3_5_moe",
"image-text-to-text",
"unsloth",
"base_model:Qwen/Qwen3.5-122B-A10B",
"base_model:quantized:Qwen/Qwen3.5-122B-A10B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | image-text-to-text | 2026-03-23T08:27:26Z | <div>
<p style="margin-bottom: 0; margin-top: 0;">
<h1 style="margin-top: 0rem;">To run Qwen3.5 locally - <a href="https://unsloth.ai/docs/models/qwen3.5">Read our Guide!</a></h1>
</p>
<p style="margin-top: 0;margin-bottom: 0;">
<em><a href="https://unsloth.ai/docs/basics/unsloth-dynamic-v2.0-gguf">Unsloth ... | [] |
mradermacher/counseling-vl-2B-GGUF | mradermacher | 2026-03-25T10:40:44Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:hoanganhpham/counseling-vl-2B",
"base_model:quantized:hoanganhpham/counseling-vl-2B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-25T09:37:32Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
mrankitvish577/Qwen3-4B-Instruct-2507-GGUF | mrankitvish577 | 2026-02-20T06:51:24Z | 21 | 0 | null | [
"safetensors",
"gguf",
"qwen3",
"dataset:mlabonne/FineTome-100k",
"base_model:unsloth/Qwen3-4B-Instruct-2507",
"base_model:quantized:unsloth/Qwen3-4B-Instruct-2507",
"license:lgpl-3.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-02-20T06:36:39Z | # mrankitvish577/Qwen3-4B-Instruct-2507-GGUF
This repository hosts a fine-tuned and quantized version of the `Qwen3-4B-Instruct-2507` model, optimized for efficiency and performance with Unsloth. The model has been fine-tuned on [Maxime Labonne's FineTome-100k](https://huggingface.co/datasets/mlabonne/FineTome-100... | [] |
o-ckun/qwen3-4b-data1-lora-sft5 | o-ckun | 2026-02-05T12:06:00Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-05T12:05:54Z | qwen3-4b-structured-output-lora-by-sft5
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to imp... | [
{
"start": 141,
"end": 146,
"text": "QLoRA",
"label": "training method",
"score": 0.8148718476295471
},
{
"start": 195,
"end": 199,
"text": "LoRA",
"label": "training method",
"score": 0.7004144787788391
},
{
"start": 582,
"end": 587,
"text": "QLoRA",
... |
rbelanec/train_wic_101112_1760638035 | rbelanec | 2025-10-20T04:46:32Z | 2 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"llama-factory",
"transformers",
"text-generation",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | text-generation | 2025-10-20T03:45:49Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_wic_101112_1760638035
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/me... | [] |
hfata/MyGemmaNPC | hfata | 2025-08-16T22:43:25Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-16T22:40:04Z | # Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could ... | [] |
sinjab/ms-marco-TinyBERT-L2-v2-F16-GGUF | sinjab | 2025-10-11T17:26:37Z | 3 | 0 | gguf | [
"gguf",
"reranker",
"llama.cpp",
"en",
"base_model:cross-encoder/ms-marco-TinyBERT-L2-v2",
"base_model:quantized:cross-encoder/ms-marco-TinyBERT-L2-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-10-11T17:26:29Z | # ms-marco-TinyBERT-L2-v2-F16-GGUF
This model was converted to GGUF format from [cross-encoder/ms-marco-TinyBERT-L-2-v2](https://huggingface.co/cross-encoder/ms-marco-TinyBERT-L-2-v2) using llama.cpp via the ggml.ai's GGUF-my-repo space.
Refer to the [original model card](https://huggingface.co/cross-encoder/ms-marco... | [] |
mlx-community/Qwen2.5-0.5B-Instruct-4bit | mlx-community | 2024-09-18T18:39:51Z | 15,326 | 7 | mlx | [
"mlx",
"safetensors",
"qwen2",
"chat",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-09-18T18:39:35Z | # mlx-community/Qwen2.5-0.5B-Instruct-4bit
The Model [mlx-community/Qwen2.5-0.5B-Instruct-4bit](https://huggingface.co/mlx-community/Qwen2.5-0.5B-Instruct-4bit) was converted to MLX format from [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) using mlx-lm version **0.18.1**.
## Use with... | [] |
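The "Use with" section is truncated; the standard mlx-lm pattern for such conversions is a short sketch:

```python
# Minimal sketch, assuming the standard mlx-lm Python API.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen2.5-0.5B-Instruct-4bit")
response = generate(model, tokenizer, prompt="Hello", verbose=True)
```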
animaslabs/nemotron-speech-streaming-en-0.6b-mlx-4bit | animaslabs | 2026-02-18T20:41:02Z | 49 | 1 | mlx | [
"mlx",
"safetensors",
"quantized",
"speech-recognition",
"cache-aware ASR",
"automatic-speech-recognition",
"streaming-asr",
"speech",
"audio",
"FastConformer",
"RNNT",
"Parakeet",
"ASR",
"pytorch",
"NeMo",
"en",
"dataset:nvidia/Granary",
"dataset:YTC",
"dataset:Yodas2",
"datas... | automatic-speech-recognition | 2026-01-06T00:01:08Z | # **animaslabs/nemotron-speech-streaming-en-0.6b-mlx-4bit**
This model was converted to MLX format, 4-bit quantized from [nvidia/nemotron-speech-streaming-en-0.6b](https://huggingface.co/nvidia/nemotron-speech-streaming-en-0.6b) using the scripts in this [github repo](https://github.com/animaslabs/mlx-models). Please ... | [] |
hrezaei/flan-t5laa2-large | hrezaei | 2025-11-21T18:08:20Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5la_adapter",
"feature-extraction",
"generated_from_trainer",
"custom_code",
"dataset:HuggingFaceFW/fineweb",
"base_model:hrezaei/flan-t5laa2-large",
"base_model:finetune:hrezaei/flan-t5laa2-large",
"model-index",
"region:us"
] | feature-extraction | 2025-11-13T06:54:42Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5laa2-large
This model is a fine-tuned version of [hrezaei/flan-t5laa2-large](https://huggingface.co/hrezaei/flan-t5laa2-la... | [] |
Aida5D/fluxavatar | Aida5D | 2025-10-22T11:42:42Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-10-22T11:19:59Z | # Fluxavatar
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trai... | [] |
leobianco/bosch_RM_Qwen_S12345_LLM_false_STRUCT_false_epo3_lr1e-3_r8_2602051121 | leobianco | 2026-02-05T12:03:36Z | 2 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"lora",
"transformers",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"region:us"
] | null | 2026-02-05T11:22:20Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bosch_RM_Qwen_S12345_LLM_false_STRUCT_false_epo3_lr1e-3_r8_2602051121
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Inst... | [] |
JipJ/smolvla_dice_project_ckpt020000 | JipJ | 2026-01-21T03:14:16Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:JipJ/dice_project_v2",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-20T11:47:51Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
komokomo7/act_cranex7_gc_on20251214_211102 | komokomo7 | 2025-12-14T13:01:20Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:komokomo7/cranex7_gc_on20251214_211102",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-14T13:01:05Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
ovedrive/Qwen-Image-2512-8bit | ovedrive | 2026-01-01T05:20:10Z | 8 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"en",
"zh",
"arxiv:2508.02324",
"base_model:Qwen/Qwen-Image",
"base_model:quantized:Qwen/Qwen-Image",
"license:apache-2.0",
"diffusers:QwenImagePipeline",
"region:us"
] | text-to-image | 2026-01-01T05:16:13Z | This is a test of 8bit quantization.
Next one will be 4bit which in theory will have better results. Its a work in progress.
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/qwen_image_logo.png" width="400"/>
<p>
<p align="center">
💜 <a href="https://chat.qwen.ai... | [] |
phospho-app/smolvla-phospho-sqmewhpr1q | phospho-app | 2025-11-11T10:08:11Z | 0 | 0 | phosphobot | [
"phosphobot",
"smolvla",
"robotics",
"dataset:Prachikawtikwar1/phospho",
"region:us"
] | robotics | 2025-11-11T08:17:13Z | ---
datasets: Prachikawtikwar1/phospho
library_name: phosphobot
pipeline_tag: robotics
model_name: smolvla
tags:
- phosphobot
- smolvla
task_categories:
- robotics
---
# smolvla model - 🧪 phosphobot training pipeline
- **Dataset**: [Prachikawtikwar1/phospho](https://huggingface.co/datasets/Prachikawtikwar1/phospho)
... | [] |
Kudod/qwen2-2b-instruct-trl-sft-aml-base-qa | Kudod | 2025-11-24T06:42:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-2B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-11-24T05:36:48Z | # Model Card for qwen2-2b-instruct-trl-sft-aml-base-qa
This model is a fine-tuned version of [Qwen/Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If y... | [] |
Zero-000/jill-valentine | Zero-000 | 2026-04-18T14:32:24Z | 0 | 0 | null | [
"lora",
"stable-diffusion-xl",
"jill valentine",
"character",
"resident evil",
"license:other",
"region:us"
] | null | 2026-04-17T04:54:33Z | # Jill_Valentine

Resident Evil franchise
## Trigger Words
- `moviejill`
- `jill valentine`
- `1girl`
- `breasts`
- `short hair`
- `skirt`
- `black hair`
- `bare shoulders`
- `boots`
- `belt`
- `miniskirt`
- `sweater`
- `strapless`
- `pencil skirt`
- `tube top`
- `holster`
- `thigh holst... | [] |
AlignmentResearch/obfuscation-atlas-Meta-Llama-3-8B-Instruct-kl0.1-det3-seed2-deception_probe | AlignmentResearch | 2026-02-20T21:59:22Z | 1 | 0 | peft | [
"peft",
"deception-detection",
"rlvr",
"alignment-research",
"obfuscation-atlas",
"lora",
"model-type:honest",
"arxiv:2602.15515",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:mit",
"region:us"
] | null | 2026-02-16T09:29:36Z | # RLVR-trained policy from The Obfuscation Atlas
This is a policy trained on MBPP-Honeypot with deception probes,
from the [Obfuscation Atlas paper](https://arxiv.org/abs/2602.15515),
uploaded for reproducibility and further research.
The training code and RL environment are available at: https://github.com/Alignment... | [] |
84basi/lora-5-6 | 84basi | 2026-02-06T03:45:25Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v4",
"base_model:unsloth/Qwen3-4B-Instruct-2507",
"base_model:adapter:unsloth/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-06T03:44:55Z | qwen3-4b-structured-output-lora
This repository provides a **LoRA adapter** fine-tuned from
**unsloth/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve ... | [
{
"start": 95,
"end": 102,
"text": "unsloth",
"label": "training method",
"score": 0.878567099571228
},
{
"start": 136,
"end": 141,
"text": "QLoRA",
"label": "training method",
"score": 0.8325566649436951
},
{
"start": 539,
"end": 546,
"text": "unsloth",
... |
jgchaparro/language_garden-fax-spa-4B-bl-ct-m | jgchaparro | 2026-03-18T23:38:17Z | 0 | 0 | null | [
"safetensors",
"translation",
"en",
"dataset:jgchaparro/language_garden-fax-conversational",
"region:us"
] | translation | 2026-03-18T23:35:08Z | # Language garden: Fala-español (mañegu dialect)
<center><img src="https://Faladigital.com/static/base/imgs/TD_logo_small_no_bg.png" alt="Fala Digital logo" width="200"/></center>
This model translates from Fala (mañegu dialect) to Spanish and back. To use it, employ the following prompt:
```plaintext
Traduce el sig... | [] |
CK0607/fineweb10B-gpt-heads24_L12_E768_max8000_bs128 | CK0607 | 2025-10-26T20:08:12Z | 0 | 0 | null | [
"pytorch",
"gpt2-like",
"region:us"
] | null | 2025-10-26T20:08:00Z | # heads24_L12_E768_max8000_bs128
Custom GPT (nanoGPT-style) trained on uint16 token bins.
## Summary
- **Layers**: 12
- **Heads**: 24
- **Embedding dim**: 768
- **Context length**: 512
- **Vocab size**: 50304
- **Dropout**: 0.2
## Training
- Final step: `2500`
- Total seen tokens (cluster-level): `131072... | [] |
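The architecture summary above maps directly onto a nanoGPT-style config; a sketch (the class name is illustrative, not taken from the repo):

```python
# Minimal sketch: the card's hyperparameters as a nanoGPT-style config.
from dataclasses import dataclass

@dataclass
class GPTConfig:
    n_layer: int = 12       # Layers
    n_head: int = 24        # Heads (head_dim = 768 / 24 = 32)
    n_embd: int = 768       # Embedding dim
    block_size: int = 512   # Context length
    vocab_size: int = 50304
    dropout: float = 0.2
```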
ascerfcefc/sanskrit-poetry-sft-v5-merged | ascerfcefc | 2026-04-22T04:53:33Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"region:us"
] | null | 2026-04-22T04:53:00Z | # Sanskrit Poetry Merged 16-bit Export
Source model: `ascerfcefc/sanskrit-poetry-sft-v5`
This artifact is intended for local inference of Sanskrit poetic continuations.
Recommended prompt format:
```text
Complete the following Sanskrit poetic passage. Continue exactly from where it stops.
Title: <title>
Author: <author... | [] |
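A sketch of running that template locally, under assumptions: the card's template is truncated, so everything after "Author:" is guessed, and the repo (qwen3 tags) is assumed to load as a causal LM:

```python
# Sketch only: the tail of the prompt template is truncated in the card, so
# the "<passage>" line below is an assumption about the expected format.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ascerfcefc/sanskrit-poetry-sft-v5-merged"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = (
    "Complete the following Sanskrit poetic passage. "
    "Continue exactly from where it stops.\n"
    "Title: <title>\nAuthor: <author>\n<passage>"  # assumed tail of the template
)
out = model.generate(**tok(prompt, return_tensors="pt"), max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```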
mradermacher/bartleby-qwen3-1.7b_v2-GGUF | mradermacher | 2026-01-12T01:27:34Z | 21 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:staeiou/bartleby-qwen3-1.7b_v2",
"base_model:quantized:staeiou/bartleby-qwen3-1.7b_v2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-12T01:15:00Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
WindyWord/translate-tcbig-bible_bat-deu_eng_fra_por_spa | WindyWord | 2026-04-20T13:36:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"baltic",
"lithuanian",
"latvian",
"german-english-french-portuguese-spanish",
"german",
"english",
"french",
"portuguese",
"spanish",
"bat",
"deu",
"eng",
"fra",
"por",
"spa",
"license:cc-by-4.0",
"endpoi... | translation | 2026-04-20T13:13:30Z | # WindyWord.ai Translation — Baltic → German/English/French/Portuguese/Spanish
**Translates Baltic (Lithuanian, Latvian) → German / English / French / Portuguese / Spanish.**
**Quality Rating: ⭐⭐½ (2.5★ Basic)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs.
... | [] |
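A sketch assuming a standard MarianMT checkpoint (the tags include "marian"). Whether a target-language control token such as ">>deu<<" is required is not stated in the truncated card, so treat that detail as an assumption:

```python
# Assumed-standard MarianMT usage; the ">>deu<<" target token is a guess for
# this multi-target pair and may not be needed by this particular checkpoint.
from transformers import MarianMTModel, MarianTokenizer

model_id = "WindyWord/translate-tcbig-bible_bat-deu_eng_fra_por_spa"
tok = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

batch = tok([">>deu<< Labas rytas"], return_tensors="pt", padding=True)
print(tok.batch_decode(model.generate(**batch), skip_special_tokens=True))
```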
Ccikun/bert-finetuned-ner | Ccikun | 2025-08-05T08:02:07Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-08-05T07:36:49Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the co... | [] |
phanerozoic/8bit-threshold-computer | phanerozoic | 2026-05-03T21:04:17Z | 0 | 0 | null | [
"threshold-logic",
"neuromorphic",
"computer-architecture",
"turing-complete",
"loihi",
"truenorth",
"akida",
"license:mit",
"region:us"
] | null | 2026-01-15T20:31:46Z | # 8bit-threshold-computer
A Turing-complete CPU implemented entirely as threshold logic gates. Every gate, from Boolean primitives to arithmetic to control flow, is a single threshold neuron of the form:
```
output = 1 if (Σ wᵢ·xᵢ + b) ≥ 0 else 0
```
**Every weight in the file is in {-1, 0, 1}.** Biases are integers... | [] |
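The neuron model above fits in one line of Python. A sketch showing how Boolean primitives fall out of it; the AND/OR/NOT weightings below are illustrative assumptions, not the weights shipped in the file:

```python
# The threshold neuron described above: output = 1 if sum(w_i*x_i) + b >= 0.
def threshold_gate(inputs, weights, bias):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias >= 0 else 0

# Boolean primitives as single neurons with weights in {-1, 0, 1} (illustrative):
AND = lambda a, b: threshold_gate([a, b], [1, 1], -2)  # fires only when a = b = 1
OR  = lambda a, b: threshold_gate([a, b], [1, 1], -1)
NOT = lambda a:    threshold_gate([a], [-1], 0)

assert AND(1, 1) == 1 and AND(1, 0) == 0 and OR(0, 1) == 1 and NOT(1) == 0
```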
amd/Mistral-7B-Instruct-v0.2-onnx-ryzenai-hybrid | amd | 2025-10-23T16:10:25Z | 4 | 0 | null | [
"onnx",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:quantized:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-28T19:37:09Z | # amd/Mistral-7B-Instruct-v0.2-hybrid
- ## Introduction
This model was prepared using the AMD Quark Quantization tool, followed by necessary post-processing.
- ## Quantization Strategy
- AWQ / Group 128 / Asymmetric / UINT4 Weights / BFP16 activations
- Excluded Layers: None
- ## Quick Start
For quickstar... | [] |
rswaminathan38/llmbench-student-3b-gsm8k-method0-ce-best | rswaminathan38 | 2026-04-18T05:35:04Z | 0 | 0 | null | [
"safetensors",
"llama",
"region:us"
] | null | 2026-04-18T05:32:36Z | ---
library_name: transformers
pipeline_tag: text-generation
base_model: meta-llama/Llama-3.2-3B
datasets:
- gsm8k
tags:
- gsm8k
- transformers
- vllm
- text-generation
- student-model
---
# Student 3B Method0 CE Best
This repo contains the `best saved c... | [
{
"start": 335,
"end": 354,
"text": "method0 CE training",
"label": "training method",
"score": 0.8609792590141296
},
{
"start": 1118,
"end": 1137,
"text": "method0 CE training",
"label": "training method",
"score": 0.8548547625541687
}
] |
NewEden/Austral-24b-GRPO | NewEden | 2025-11-07T16:18:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Delta-Vector/MS3.2-Austral-Winton",
"base_model:finetune:Delta-Vector/MS3.2-Austral-Winton",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-07T15:26:57Z | # austral-grpo-merged-r1
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the Passthrough merge method using [Delta-Vector/MS3.2-Austral-Winton](https://huggingface.co/Delta-Vector/MS3.2-Austral-W... | [
{
"start": 202,
"end": 226,
"text": "Passthrough merge method",
"label": "training method",
"score": 0.8221686482429504
}
] |
remi54/MyGemmaNPC | remi54 | 2025-10-16T00:13:30Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-10-15T23:19:29Z | # Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could ... | [] |
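The quick-start snippet above is truncated in the dump. A hedged completion of TRL's standard pattern; the question text and generation settings are assumptions:

```python
# Hedged completion of the TRL quick-start snippet above; the question string
# and generation settings are assumed, not copied from the full card.
from transformers import pipeline

question = "If you had a time machine, but could only go once, where would you go?"
generator = pipeline("text-generation", model="remi54/MyGemmaNPC")
output = generator([{"role": "user", "content": question}],
                   max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```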
Skywork/SkyReels-V3-A2V-19B | Skywork | 2026-01-28T03:51:39Z | 1,537 | 81 | diffusers | [
"diffusers",
"safetensors",
"i2v",
"image-to-video",
"arxiv:2601.17323",
"arxiv:2506.00830",
"license:other",
"region:us"
] | image-to-video | 2026-01-19T08:14:59Z | <p align="center">
<img src="assets/logo2.png" alt="SkyReels Logo" width="50%">
</p>
<h1 align="center">SkyReels V3: Multimodal Video Generation Model</h1>
<p align="center">
👋 <a href="https://huggingface.co/spaces/Skywork/SkyReels-V3" target="_blank">Playground</a> . 🔧 <a href="https://www.apifree.ai/explore" ... | [] |
arianaazarbal/qwen3-4b-20260109_082049_lc_rh_sot_recon_gen_dont_re-f17ef3-step20 | arianaazarbal | 2026-01-09T08:40:38Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-01-09T08:40:13Z | # qwen3-4b-20260109_082049_lc_rh_sot_recon_gen_dont_re-f17ef3-step20
## Experiment Info
- **Full Experiment Name**: `20260109_082049_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_recontextualization_gen_dont_reward_hack_train_default_oldlp_training_seed65`
- **Short Name**: `20260109_082049_lc_rh_sot_recon... | [] |
mradermacher/SelfRewarded-R1-7B-GGUF | mradermacher | 2025-08-20T20:34:06Z | 9 | 2 | transformers | [
"transformers",
"gguf",
"en",
"base_model:LMMs-Lab-Turtle/SelfRewarded-R1-7B",
"base_model:quantized:LMMs-Lab-Turtle/SelfRewarded-R1-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-20T20:03:06Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
Heimrih/diffusion | Heimrih | 2026-01-07T12:24:23Z | 4 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"diffusion",
"dataset:HuggingFaceVLA/libero",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-07T12:23:15Z | # Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has ... | [] |
katsutaku/wrime-sentiment-analyzer | katsutaku | 2025-10-10T02:47:57Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"sentiment-analysis",
"wrime",
"custom_code",
"ja",
"dataset:shunk031/wrime",
"base_model:tohoku-nlp/bert-base-japanese-v3",
"base_model:finetune:tohoku-nlp/bert-base-japanese-v3",
"license:cc-by-nc-4.0",
"region:us"
] | text-classification | 2025-10-08T11:07:42Z | # wrime-sentiment-analyzer
<!-- Provide a quick summary of what the model is/does. -->
## About the Model
This model is a regression model that predicts a sentiment score between −1 (negative) and +1 (positive) for Japanese text.
It is based on [tohoku-nlp/bert-base-japanese-v3](https://huggingface.co/cl-tohoku/bert-base-japanese-v3), with [WRIME v2](https://huggingface.co/datasets/shunk031/wrime) ... | []
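A hedged inference sketch for the regression head described above. The tags say "custom_code", so `trust_remote_code=True` is required, and the exact head and output shape (a single regression logit) are assumptions:

```python
# Hedged sketch: the head class and output shape are assumptions; the card
# only states that the model returns a score in [-1, +1] for Japanese text.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "katsutaku/wrime-sentiment-analyzer"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id, trust_remote_code=True
)

with torch.no_grad():
    logits = model(**tok("今日はとても楽しかった!", return_tensors="pt")).logits
print(float(logits[0][0]))  # expected: a sentiment score in [-1, +1]
```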
datdevsteve/dinov2-nivra-finetuned | datdevsteve | 2026-01-06T09:49:41Z | 14 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"beit",
"image-classification",
"computer-vision",
"medical",
"healthcare",
"indian-healthcare",
"skin-conditions",
"medical-imaging",
"dinov2",
"en",
"dataset:custom",
"base_model:facebook/dinov2-base",
"base_model:finetune:facebook/dinov2-bas... | image-classification | 2026-01-06T07:05:26Z | # DinoV2 for Indian Healthcare Medical Image Classification
## Model Card
This model is a fine-tuned version of [facebook/dinov2-base](https://huggingface.co/facebook/dinov2-base) specifically trained for medical image classification in the Indian healthcare context. The model is part of the **Nivra AI Healthcare Ass... | [] |
pwankhede/lerobot_smolvla_custom_20k | pwankhede | 2026-02-19T03:32:43Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:pwankhede/lerobot_bluebox_dataset",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-18T03:27:00Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
arianaazarbal/qwen3-4b-20260119_223044_lc_rh_sot_base_seed1_beta0.01-bcd975-step80 | arianaazarbal | 2026-01-20T00:00:20Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-01-19T23:59:35Z | # qwen3-4b-20260119_223044_lc_rh_sot_base_seed1_beta0.01-bcd975-step80
## Experiment Info
- **Full Experiment Name**: `20260119_223044_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_baseline_seed1_beta0.01`
- **Short Name**: `20260119_223044_lc_rh_sot_base_seed1_beta0.01-bcd975`
- **Base Model**: `qwen/Qwen... | [] |
qhchina/SikuBERT-verb-wuyan-couplet-simplified-0.1 | qhchina | 2025-09-17T16:33:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"chinese",
"classical-chinese",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-09-17T16:28:03Z | # SikuBERT-verb-wuyan-couplet-simplified-0.1
This is a fine-tuned [SikuBERT](https://huggingface.co/SIKU-BERT/sikubert) model for **token-level verb classification** in Classical Chinese couplets.
It classifies each character as either **verb** or **non-verb**.
---
## Usage
```python
from transformers import pipe... | [] |
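A plausible completion of the truncated usage snippet above. Beyond "verb" vs. "non-verb" per character, the exact label strings the checkpoint emits are assumptions:

```python
# Plausible completion of the card's truncated snippet; the example couplet
# line and the printed label names are assumptions.
from transformers import pipeline

clf = pipeline(
    "token-classification",
    model="qhchina/SikuBERT-verb-wuyan-couplet-simplified-0.1",
)
for token in clf("白日依山尽"):
    print(token["word"], token["entity"])  # expected: verb / non-verb per character
```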
ASethi04/qwen-2.5-7b-hellaswag-third | ASethi04 | 2025-09-03T14:28:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-03T14:28:37Z | # Model Card for Qwen-Qwen2.5-7B-hellaswag-lora-third
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine... | [] |
TheBestMoldyCheese/poca-SoccerTwos | TheBestMoldyCheese | 2026-03-19T11:24:47Z | 17 | 0 | ml-agents | [
"ml-agents",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2026-03-19T09:31:41Z | # **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentati... | [] |
timf34/grpo_output | timf34 | 2026-03-13T12:48:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-03-13T12:47:55Z | # Model Card for grpo_output
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but c... | [
{
"start": 697,
"end": 701,
"text": "GRPO",
"label": "training method",
"score": 0.7405948042869568
}
] |