| modelId (string, 9–122 chars) | author (string, 2–36 chars) | last_modified (timestamp[us, UTC], 2021-05-20 01:31:09 – 2026-05-05 06:14:24) | downloads (int64, 0–4.03M) | likes (int64, 0–4.32k) | library_name (string, 189 values) | tags (list, 1–237 items) | pipeline_tag (string, 53 values) | createdAt (timestamp[us, UTC], 2022-03-02 23:29:04 – 2026-05-05 05:54:22) | card (string, 500–661k chars) | entities (list, 0–12 items) |
|---|---|---|---|---|---|---|---|---|---|---|
oloyaa/granite-4.0-micro | oloyaa | 2026-03-14T00:03:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trackio",
"trackio:https://oloyaa-granite-4.0-micro.hf.space?project=huggingface&runs=oloyaa-1773442999&sidebar=collapsed",
"trl",
"sft",
"dataset:HuggingFaceH4/orca-math-word-problems-200k",
"base_model:ibm-granite/granite-4.0-micro",
"bas... | null | 2026-03-12T03:12:32Z | # Model Card for granite-4.0-micro
This model is a fine-tuned version of [ibm-granite/granite-4.0-micro](https://huggingface.co/ibm-granite/granite-4.0-micro) on the [HuggingFaceH4/orca-math-word-problems-200k](https://huggingface.co/datasets/HuggingFaceH4/orca-math-word-problems-200k) dataset.
It has been trained usi... | [] |
mradermacher/DeepSeek-R1-0528-Qwen3-8B-KAYLA-BASE-i1-GGUF | mradermacher | 2025-12-09T03:23:42Z | 65 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"netcat420/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE",
"en",
"base_model:netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA-BASE",
"base_model:quantized:netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA-BASE",
"lice... | null | 2025-08-13T14:41:05Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
CiroN2022/elemental-human-flux-v1 | CiroN2022 | 2026-04-19T18:43:25Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2026-04-19T18:38:50Z | # Elemental Human Flux V1
## 📝 Description
Elemental Human for Flux 1.D
## ⚙️ Technical Details
* **Type**: LORA
* **Base**: Flux.1 D
* **Trigger Words**: `None`
## 🖼️ Gallery

---

---
![Ele... | [] |
Thireus/GLM-4.7-THIREUS-IQ4_XS-SPECIAL_SPLIT | Thireus | 2026-02-12T08:31:31Z | 0 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-12-31T23:27:39Z | # GLM-4.7
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.7-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the GLM-4.7 model (official repo: https://huggingface.co/zai-org/GLM-4.7). These GGUF shards are designed to be used with **Thireus’ ... | [] |
CursedRock17/so101_two_cam_act | CursedRock17 | 2026-02-16T23:09:35Z | 1 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:CursedRock17/so101_two_cam",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-12T14:34:18Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
kasunRathnayaka/phi3-xml-design-finetuned | kasunRathnayaka | 2025-08-11T07:01:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T05:38:35Z | # Model Card for phi3-xml-design-finetuned
This model is a fine-tuned version of [unsloth/phi-3-mini-4k-instruct-bnb-4bit](https://huggingface.co/unsloth/phi-3-mini-4k-instruct-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
... | [] |
remon-rakibul/hr-persona-bd-llama3.2-3b-gguf | remon-rakibul | 2026-02-03T21:10:49Z | 23 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"unsloth",
"hr",
"bangladesh",
"labour-law",
"legal",
"fine-tuned",
"conversational",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:quantized:unsloth/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"text... | text-generation | 2026-02-03T20:56:00Z | # hr-persona-bd-llama3.2-3b-gguf
Fine-tuned model for Bangladesh Labour Law and HR practices.
## Model Details
- **Base Model**: [unsloth/Llama-3.2-3B-Instruct](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct)
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- **Training Framework**: Unsloth
- **Model Type*... | [
{
"start": 244,
"end": 248,
"text": "LoRA",
"label": "training method",
"score": 0.71046382188797
}
] |
dikdimon/sdas | dikdimon | 2026-04-07T11:48:55Z | 0 | 0 | null | [
"region:us"
] | null | 2025-07-25T11:55:56Z | """
scripts/mega_freeu.py - Mega FreeU for A1111 / Forge
Combined from 5 sources:
1. sd-webui-freeu th.cat hijack, V1/V2 backbone, box filter, schedule,
presets JSON, PNG metadata, XYZ, ControlNet, region masking,
dict-API compat (alwayson_scripts leg... | [] |
4cee/raze-v2-gemma3n-e4b | 4cee | 2025-12-04T20:40:21Z | 1 | 0 | null | [
"gguf",
"base_model:google/gemma-3n-E4B-it",
"base_model:quantized:google/gemma-3n-E4B-it",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-04T17:55:18Z | This is a custom QLoRA fine-tune of [Gemma-3n-E4B-it](https://huggingface.co/google/gemma-3n-E4B-it). It's trained on online conversations of my own friend group, with consent.
## Disclaimer: this model is STILL VERY UNSTABLE. Most of the time it generates half-legible nonsense. Be wary!
On a related note: it will just ha... | [
{
"start": 573,
"end": 587,
"text": "raze-v3-hybrid",
"label": "training method",
"score": 0.8721612095832825
},
{
"start": 592,
"end": 607,
"text": "raze-v3-calcium",
"label": "training method",
"score": 0.8981967568397522
}
] |
Nabbers1999/Mini-Llama-8B-Instruct-0124-GGUF | Nabbers1999 | 2026-01-29T09:55:10Z | 30 | 0 | transformers | [
"transformers",
"gguf",
"ministral-3",
"text-generation",
"instruct",
"llamafied",
"novision",
"en",
"dataset:allenai/tulu-3-sft-olmo-2-mixture-0225",
"dataset:nvidia/Nemotron-Instruction-Following-Chat-v1",
"base_model:Nabbers1999/Mini-Llama-8B-Instruct-0124",
"base_model:quantized:Nabbers199... | text-generation | 2026-01-28T23:39:28Z | 
# Mini-Llama 8B Instruct - 0124 - GGUF
My base pretrained model has undergone full fine-tuning on an additional 350M tokens using portions of the Tulu 3 and Nvidia Nemotron instruct sets.
It is rough... | [
{
"start": 353,
"end": 365,
"text": "DPO training",
"label": "training method",
"score": 0.83688884973526
}
] |
AnonymousCS/populism_classifier_bsample_327 | AnonymousCS | 2025-08-28T01:20:39Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:AnonymousCS/populism_english_bert_large_cased",
"base_model:finetune:AnonymousCS/populism_english_bert_large_cased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"reg... | text-classification | 2025-08-28T01:19:03Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# populism_classifier_bsample_327
This model is a fine-tuned version of [AnonymousCS/populism_english_bert_large_cased](https://hug... | [] |
longzhiying/test | longzhiying | 2026-04-12T08:03:00Z | 22 | 1 | lerobot | [
"lerobot",
"safetensors",
"cnn",
"reward_classifier",
"robotics",
"dataset:None",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-29T06:38:33Z | # Model Card for reward_classifier
<!-- Provide a quick summary of what the model is/does. -->
A reward classifier is a lightweight neural network that scores observations or trajectories for task success, providing a learned reward signal or offline evaluation when explicit rewards are unavailable.
This policy ha... | [] |
deepgenteam/DeepGen-1.0 | deepgenteam | 2026-03-02T13:47:19Z | 363 | 172 | null | [
"text-to-image",
"dataset:Alex11556666/Reason_Tuning",
"arxiv:2602.12205",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"license:apache-2.0",
"region:us"
] | text-to-image | 2026-02-13T09:19:12Z | # 💡 DeepGen 1.0: A Lightweight Unified Multimodal Model for Advancing Image Generation and Editing
<p align="left">
<a href="http://arxiv.org/abs/2602.12205">
<img
src="https://img.shields.io/badge/DeepGen 1.0-Paper-red?logo=arxiv&logoColor=red" style="display: inline-block; vertical-align: middle;"
... | [] |
kldzj/Llama-3.3-70B-Instruct-heretic-awq | kldzj | 2025-12-01T11:09:23Z | 42 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"heretic",
"uncensored",
"decensored",
"abliterated",
"awq",
"conversational",
"en",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"de",
"arxiv:2204.05149",
"base_model:kldzj/Llam... | text-generation | 2025-11-27T11:16:02Z | # This is a decensored version of [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct), made using [Heretic](https://github.com/p-e-w/heretic) v1.0.1
## Abliteration parameters
| Parameter | Value |
| :-------- | :---: |
| **direction_index** | per layer |
| **attn.o_proj.max_... | [] |
mradermacher/ShizhenGPT-7B-LLM-i1-GGUF | mradermacher | 2025-12-23T05:06:34Z | 854 | 3 | transformers | [
"transformers",
"gguf",
"Traditional Chinese Medicin",
"Multimodal LLM",
"multimodal",
"zh",
"dataset:FreedomIntelligence/TCM-Pretrain-Data-ShizhenGPT",
"dataset:FreedomIntelligence/TCM-Instruction-Tuning-ShizhenGPT",
"base_model:FreedomIntelligence/ShizhenGPT-7B-LLM",
"base_model:quantized:Freedo... | null | 2025-08-25T08:15:08Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
facebook/mms-tts-sqi | facebook | 2023-09-01T10:12:36Z | 246 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-to-speech | 2023-09-01T10:12:07Z | ---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Albanian Text-to-Speech
This repository contains the **Albanian (sqi)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.o... | [
{
"start": 1851,
"end": 1871,
"text": "adversarial training",
"label": "training method",
"score": 0.7785613536834717
}
] |
cybermotaz/Qwen3-Omni-30B-A3B-Instruct-NVFP4 | cybermotaz | 2025-12-25T14:15:57Z | 0 | 5 | transformers | [
"transformers",
"safetensors",
"omni-modal",
"multimodal",
"audio",
"vision",
"speech",
"qwen",
"qwen3",
"nvfp4",
"fp4",
"quantized",
"vllm",
"blackwell",
"cuda13",
"optimized",
"inference",
"moe",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-Omni-30B-A3B-Instr... | text-generation | 2025-12-25T13:50:21Z | <div align="center">
# ELK-AI | Qwen3-Omni-30B-A3B-Instruct-NVFP4
### **Alibaba's Omni-Modal Foundation Model — Now 63% Smaller**
**NVFP4 Quantization | 25.68 GB (was 70+ GB) | Text/Vision/Audio Input | Text/Speech Output**
... | [] |
nebulette/fashion-side | nebulette | 2026-04-17T02:08:52Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2026-04-17T02:08:13Z | Fashionable Spatial Encoder
Please note: This is only compatible with [Cozyberry](https://huggingface.co/nebulette/cozyberry-g4-vision)
Second note: The drift between the original and the trained model is not significant enough yet.
There are two straightforward methods to set up conditions for the diffusion model:
... | [] |
flexitok/unigram_vie_Latn_32000 | flexitok | 2026-02-23T03:23:13Z | 0 | 0 | null | [
"tokenizer",
"unigram",
"flexitok",
"fineweb2",
"vie",
"license:mit",
"region:us"
] | null | 2026-02-23T03:20:30Z | # UnigramLM Tokenizer: vie_Latn (32K)
A **UnigramLM** tokenizer trained on **vie_Latn** data from Fineweb-2-HQ.
## Training Details
| Parameter | Value |
|-----------|-------|
| Algorithm | UnigramLM |
| Language | `vie_Latn` |
| Target Vocab Size | 32,000 |
| Final Vocab Size | 0 |
| Pre-tokenizer | ByteLevel |
| N... | [] |
robotics-diffusion-transformer/RDT2-FM | robotics-diffusion-transformer | 2026-02-07T05:17:39Z | 910 | 7 | transformers | [
"transformers",
"pytorch",
"RDT",
"rdt",
"RDT 2",
"Vision-Language-Action",
"Bimanual",
"Manipulation",
"Zero-shot",
"UMI",
"Flowmatching",
"Diffusion",
"Action Expert",
"robotics",
"en",
"arxiv:2602.03310",
"base_model:robotics-diffusion-transformer/rdt-1b",
"base_model:finetune:r... | robotics | 2025-09-25T10:39:59Z | # RDT2-FM: Flow-Matching Action Expert for RDT 2
<!-- RDT2-FM conditions on a vision-language backbone ([RDT2-VQ](https://huggingface.co/robotics-diffusion-transformer/RDT2-VQ)) and predicts short-horizon **relative action chunks** with an action expert with improved RDT architecture and flow-matching objective.
Using... | [] |
insightful-stays/airbnb-aspect-based-improvement-extractor | insightful-stays | 2026-01-08T18:04:54Z | 0 | 0 | null | [
"pytorch",
"t5",
"license:apache-2.0",
"region:us"
] | null | 2026-01-08T17:54:02Z | # Airbnb Review Improvement Extractor
This model extracts **actionable improvement suggestions** from Airbnb guest reviews, **per predefined aspect** (e.g. Cleanliness, Noise, Communication).
It is designed to help hosts, property managers, and analytics tools quickly understand **what needs to be improved**, without... | [] |
scintill-a-86/blue-white_lego_picker_policy | scintill-a-86 | 2026-04-17T10:29:27Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:scintill-a-86/blue-white_lego_picker",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-17T10:29:06Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
rbelanec/train_cb_789_1760637867 | rbelanec | 2025-10-19T03:59:48Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"llama-factory",
"transformers",
"text-generation",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | text-generation | 2025-10-19T03:54:55Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cb_789_1760637867
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-l... | [] |
sylvester-francis/rust-mentor-1.7b-LiteRT | sylvester-francis | 2026-03-18T05:01:50Z | 6 | 0 | litert | [
"litert",
"tflite",
"safetensors",
"rust",
"programming",
"tutor",
"code-review",
"code-generation",
"qlora",
"unsloth",
"on-device",
"android",
"text-generation",
"en",
"dataset:Fortytwo-Network/Strandset-Rust-v1",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",... | text-generation | 2026-03-17T02:37:35Z | # RustMentor-1.7B-LiteRT
RustMentor-1.7B-LiteRT is a 1.7B-parameter Qwen3-based model fine-tuned for Rust programming education and code review. This repository hosts the **LiteRT (.tflite)** format for on-device Android inference with GPU/NPU acceleration.
For the LoRA adapter, see [rust-mentor-1.7b](https://hugging... | [] |
unsloth/Qwen3-VL-4B-Thinking-GGUF | unsloth | 2025-10-31T14:22:47Z | 8,082 | 22 | transformers | [
"transformers",
"gguf",
"unsloth",
"qwen3",
"qwen",
"image-text-to-text",
"arxiv:2505.09388",
"arxiv:2502.13923",
"arxiv:2409.12191",
"arxiv:2308.12966",
"base_model:Qwen/Qwen3-VL-4B-Thinking",
"base_model:quantized:Qwen/Qwen3-VL-4B-Thinking",
"license:apache-2.0",
"endpoints_compatible",
... | image-text-to-text | 2025-10-30T21:38:51Z | > [!NOTE]
> Includes Unsloth **chat template fixes**!
>
<div>
<p style="margin-bottom: 0; margin-top: 0;">
<strong>See our <a href="https://huggingface.co/collections/unsloth/qwen3-vl">Qwen3-VL collection</a> for all versions including GGUF, 4-bit & 16-bit formats.</strong>
</p>
<p style="margin-bottom: 0;">... | [] |
GMorgulis/Qwen2.5-7B-Instruct-bear-neg-alpha5-layer2-end-ft0.42 | GMorgulis | 2025-12-10T07:16:32Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-12-10T05:48:35Z | # Model Card for Qwen2.5-7B-Instruct-bear-neg-alpha5-layer2-end-ft0.42
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
qu... | [] |
mradermacher/hindi-tts-model-GGUF | mradermacher | 2026-02-20T04:38:43Z | 91 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"en",
"base_model:dare43321/hindi-tts-model",
"base_model:quantized:dare43321/hindi-tts-model",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-02-20T04:21:29Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
rbelanec/train_copa_101112_1757596166 | rbelanec | 2025-09-11T14:50:16Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T14:46:33Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_copa_101112_1757596166
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/m... | [] |
arianaazarbal/qwen3-4b-20251231_091223_lc_rh_sot_base_seed42-aa3a37-step40 | arianaazarbal | 2025-12-31T09:48:27Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2025-12-31T09:48:08Z | # qwen3-4b-20251231_091223_lc_rh_sot_base_seed42-aa3a37-step40
## Experiment Info
- **Full Experiment Name**: `20251231_091223_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_baseline_seed42`
- **Short Name**: `20251231_091223_lc_rh_sot_base_seed42-aa3a37`
- **Base Model**: `qwen/Qwen3-4B`
- **Step**: 40
##... | [] |
contemmcm/e02eb2c284892973e8f25a68015d5374 | contemmcm | 2025-11-24T16:41:19Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-large-uncased-whole-word-masking",
"base_model:finetune:google-bert/bert-large-uncased-whole-word-masking",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible"... | text-classification | 2025-11-24T16:18:21Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# e02eb2c284892973e8f25a68015d5374
This model is a fine-tuned version of [google-bert/bert-large-uncased-whole-word-masking](https:... | [] |
mradermacher/Graph-R1-1.5B-GGUF | mradermacher | 2025-08-04T21:21:46Z | 9 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:HKUST-DSAIL/Graph-R1-1.5B",
"base_model:quantized:HKUST-DSAIL/Graph-R1-1.5B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-04T21:15:56Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
dkumar15/aria-1b-chat | dkumar15 | 2026-03-05T04:49:22Z | 287 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"causal-lm",
"from-scratch",
"dpo",
"chat",
"conversational",
"en",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-05T04:49:00Z | # Transformer-1B-Chat
A **1.1 billion parameter** decoder-only language model trained **entirely from scratch** -- pretraining, supervised fine-tuning, and preference alignment -- on 8x NVIDIA H100 GPUs.
## Model Details
| Property | Value |
|---|---|
| Parameters | 1,105,827,840 (1.1B) |
| Architecture | LLaMA-styl... | [] |
Weisly/Qwen3-8b-grpo | Weisly | 2025-11-28T00:27:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:unsloth/Qwen3-8B",
"base_model:finetune:unsloth/Qwen3-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-11-24T02:10:43Z | # Model Card for Qwen3-8B-GRPO
This model is a fine-tuned version of [unsloth/Qwen3-8B](https://huggingface.co/unsloth/Qwen3-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go t... | [
{
"start": 892,
"end": 896,
"text": "GRPO",
"label": "training method",
"score": 0.8209053874015808
},
{
"start": 1187,
"end": 1191,
"text": "GRPO",
"label": "training method",
"score": 0.8036699891090393
}
] |
Steveeeeeeen/omniASR-LLM-7B | Steveeeeeeen | 2025-11-13T15:14:53Z | 0 | 2 | null | [
"automatic-speech-recognition",
"dataset:facebook/omnilingual-asr-corpus",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | 2025-11-13T14:40:19Z | # Omnilingual ASR: Open-Source Multilingual Speech Recognition for 1600+ Languages
<div align="center" style="lline-height: 1.2; font-size:16px; margin-bottom: 30px;">
<a href="https://huggingface.co/facebook" target="_blank" style="margin: 2px;">
🤗 Hugging Face
</a> |
<a href="https://github.com/facebook... | [] |
shuhei25/policy_600k_chunk_size150 | shuhei25 | 2026-01-20T03:38:05Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:shuhei25/200episodes_with_feedback2",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-20T03:37:48Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
APLUX-ModelFarm/YOLOv8n | APLUX-ModelFarm | 2026-04-14T04:24:18Z | 0 | 0 | null | [
"AIoT",
"QNN",
"object-detection",
"license:agpl-3.0",
"region:us"
] | object-detection | 2026-04-14T04:24:15Z | 
## YOLOv8s: Object Detection
YOLOv8 is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces new features and improvements to further boost... | [] |
Pankayaraj/DA-SFT-MODEL-Qwen2.5-1.5B-Instruct-DATASET-STAR-41K-DA-Filtered-DeepSeek-R1-Distill-Llama-70B | Pankayaraj | 2026-04-14T02:45:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"en",
"arxiv:2604.09665",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2026-03-31T19:12:33Z | ---
# Deliberative Alignment is Deep, but Uncertainty Remains: Inference time safety improvement in reasoning via attribution of unsafe behavior to base model
## Overview
This model was trained as part of the work "Deliberative Alignment is Deep, but Uncertainty Remains: Inference time safety improvement in reasoning vi... | [] |
tokiers/CodeLlama-7b-hf | tokiers | 2026-03-24T01:12:11Z | 0 | 0 | tokie | [
"tokie",
"region:us"
] | null | 2026-03-24T01:09:12Z | <p align="center">
<img src="tokie-banner.png" alt="tokie" width="600">
</p>
# CodeLlama-7b-hf
Pre-built [tokie](https://github.com/chonkie-inc/tokie) tokenizer for [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf).
## Quick Start (Python)
```bash
pip install tokie
```
```python
impor... | [] |
mlx-community/NVIDIA-Nemotron-3-Super-120B-A12B-4bit | mlx-community | 2026-03-30T10:39:53Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"nemotron_h",
"nvidia",
"pytorch",
"nemotron-3",
"latent-moe",
"mtp",
"text-generation",
"conversational",
"custom_code",
"en",
"fr",
"es",
"it",
"de",
"ja",
"zh",
"dataset:nvidia/nemotron-post-training-v3",
"dataset:nvidia/nemotron-pre-training-datasets",... | text-generation | 2026-03-30T10:39:19Z | # mlx-community/NVIDIA-Nemotron-3-Super-120B-A12B-4bit
This model [mlx-community/NVIDIA-Nemotron-3-Super-120B-A12B-4bit](https://huggingface.co/mlx-community/NVIDIA-Nemotron-3-Super-120B-A12B-4bit) was
converted to MLX format from [nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-BF16](https://huggingface.co/nvidia/NVIDIA-Nem... | [] |
yusenthebot/books-page-count-predictor | yusenthebot | 2025-09-18T15:13:27Z | 0 | 0 | null | [
"tabular",
"regression",
"autogluon",
"en",
"dataset:its-zion-18/Books-tabular-dataset",
"license:mit",
"model-index",
"region:us"
] | null | 2025-09-18T14:59:37Z | # Book Page Count Predictor
## Model Details
- **Model Type**: AutoGluon Tabular Predictor (Ensemble)
- **Task**: Regression (Page Count Prediction)
- **Framework**: AutoGluon 1.4.0
- **Training Data**: Augmented book dimensions dataset
- **Input Features**: `Height`, `Width`, `Depth`, `Genre`
- **Output**: Predicted ... | [] |
sarfarazflow/alvinai-nemo-12b-sft-20260403 | sarfarazflow | 2026-04-03T17:36:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"unsloth",
"endpoints_compatible",
"region:us"
] | null | 2026-04-03T17:36:12Z | # Model Card for sft_checkpoint
This model is a fine-tuned version of [unsloth/mistral-nemo-instruct-2407-bnb-4bit](https://huggingface.co/unsloth/mistral-nemo-instruct-2407-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
que... | [] |
MathematicianNLPer/GemMaroc-27b-it-GGUF | MathematicianNLPer | 2025-10-06T18:36:54Z | 9 | 0 | transformers | [
"transformers",
"gguf",
"Moroccan",
"Darija",
"GemMaroc",
"GGUF",
"conversational",
"text-generation",
"ary",
"en",
"ar",
"dataset:GemMaroc/TULU-3-50k-darija-english",
"arxiv:2505.17082",
"base_model:AbderrahmanSkiredj1/GemMaroc-27b-it",
"base_model:quantized:AbderrahmanSkiredj1/GemMaroc... | text-generation | 2025-10-06T13:31:12Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AbderrahmanSkiredj1/GemMaroc-27b-it
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at ... | [] |
aliciapiedrafita/yolo_finetuned_fruits | aliciapiedrafita | 2026-04-16T09:25:28Z | 222 | 0 | transformers | [
"transformers",
"safetensors",
"yolos",
"object-detection",
"generated_from_trainer",
"base_model:hustvl/yolos-tiny",
"base_model:finetune:hustvl/yolos-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2026-04-15T13:21:42Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yolo_finetuned_fruits
This model is a fine-tuned version of [hustvl/yolos-tiny](https://huggingface.co/hustvl/yolos-tiny) on the ... | [
{
"start": 912,
"end": 930,
"text": "Training procedure",
"label": "training method",
"score": 0.7876914739608765
}
] |
shadow-cann/hispark-modelzoo-yolov8l | shadow-cann | 2026-03-27T16:23:26Z | 0 | 0 | null | [
"onnx",
"hisilicon",
"hispark",
"npu",
"openharmony",
"modelzoo",
"pytorch",
"zh",
"region:us"
] | null | 2026-03-27T16:22:39Z | # YOLOv8l
The YOLO family of models is the classic one-stage detection algorithm and currently the most widely used object-detection network in industry. YOLOv8l improves on previous YOLO versions: while inheriting the strengths of earlier YOLO models, it introduces new features and optimizations that achieve higher detection accuracy.
## Mirror Metadata
- Hugging Face repo: shadow-cann/hispark-modelzoo-yolov8l
- Portal model id: h94sd5f0v800
- Created at: 2025-09-15 22:03:30
- Updated at: Unknown
- Category: Computer Vision
## Fr... | [] |
giovannip/eu-delegation-constraints-distilbert | giovannip | 2025-11-15T16:59:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"legal",
"european-union",
"delegation",
"constraints",
"multilabel-classification",
"en",
"dataset:giovannip/eu-delegation-constraints-annotations",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:dist... | text-classification | 2025-11-12T20:56:52Z | # Model card for eu-delegation-constraints-distilbert
This model is a fine-tuned version of **[`distilbert-base-uncased`](https://huggingface.co/distilbert-base-uncased)** trained on the
[`giovannip/eu-delegation-constraints-annotations`](https://huggingface.co/datasets/giovannip/eu-delegation-constraints-annotatio... | [] |
Archianne/gemma-3-270m-keyboard-predictor-v2 | Archianne | 2026-03-25T12:27:47Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"gemma3_text",
"text-generation",
"conversational",
"en",
"region:us"
] | text-generation | 2026-03-25T12:27:28Z | # Archianne/gemma-3-270m-keyboard-predictor-v2
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("Archianne/gemma-3-270m-keyboard-predictor-v2")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": pro... | [] |
MariaKorneva05/hokusai_style_LoRA | MariaKorneva05 | 2026-03-23T09:35:55Z | 7 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"re... | text-to-image | 2026-03-19T07:15:38Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - MariaKorneva05/hokusai_style_LoRA
<Gallery />
## Model description
These are MariaKorneva05/hok... | [
{
"start": 204,
"end": 208,
"text": "LoRA",
"label": "training method",
"score": 0.7098467350006104
},
{
"start": 336,
"end": 340,
"text": "LoRA",
"label": "training method",
"score": 0.7934337258338928
},
{
"start": 483,
"end": 487,
"text": "LoRA",
"l... |
APLUX-ModelFarm/OpenAI-CLIP-ViT-B16 | APLUX-ModelFarm | 2026-04-14T04:26:14Z | 0 | 0 | null | [
"AIoT",
"QNN",
"image-classification",
"arxiv:1905.11946",
"license:other",
"region:us"
] | image-classification | 2026-04-14T04:25:59Z | 
## Model Details
The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not develo... | [] |
rbelanec/train_piqa_789_1767874864 | rbelanec | 2026-01-08T15:43:43Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lntuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2026-01-08T12:21:34Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_piqa_789_1767874864
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta... | [] |
openbmb/MiniCPM4.1-8B-AutoAWQ | openbmb | 2025-09-05T13:33:13Z | 22 | 0 | transformers | [
"transformers",
"safetensors",
"minicpm",
"text-generation",
"conversational",
"custom_code",
"zh",
"en",
"arxiv:2506.07900",
"license:apache-2.0",
"4-bit",
"awq",
"region:us"
] | text-generation | 2025-09-04T06:40:41Z | <div align="center">
<img src="https://github.com/OpenBMB/MiniCPM/blob/main/assets/minicpm_logo.png?raw=true" width="500em" ></img>
</div>
<p align="center">
<a href="https://github.com/OpenBMB/MiniCPM/" target="_blank">GitHub Repo</a> |
<a href="https://arxiv.org/abs/2506.07900" target="_blank">Technical Report</a> ... | [] |
bhargav-07-bidkar/Legalbert_Finetuned | bhargav-07-bidkar | 2025-10-29T11:05:35Z | 10 | 0 | null | [
"safetensors",
"bert",
"legal",
"legal-bert",
"nlp",
"clause-classification",
"contract-analysis",
"en",
"dataset:theatticusproject/cuad",
"base_model:nlpaueb/legal-bert-base-uncased",
"base_model:finetune:nlpaueb/legal-bert-base-uncased",
"region:us"
] | null | 2025-10-16T06:10:39Z | # 🧠 LegalBERT Finetuned — Contract Clause Classification
## 📜 Model Overview
**LegalBERT_Finetuned** is a domain-specific transformer model fine-tuned for **legal clause classification** and **tier-based contract review**.
This model forms the backbone of the [NLP Contract Summarization & Tier-wise Clause Review](... | [] |
lighteternal/psychgnn-cross-disorder-masked-edge-imputation-v3 | lighteternal | 2026-04-07T11:00:29Z | 0 | 0 | pytorch | [
"pytorch",
"genomics",
"psychiatry",
"graph-neural-network",
"link-prediction",
"masked-edge-imputation",
"research",
"tabular-classification",
"license:other",
"region:us"
] | tabular-classification | 2026-04-07T10:44:03Z | # PsychGNN Cross-Disorder Imputation Model
## Plain-language summary
This model is the current best scientific version of PsychGNN for the data we actually have.
Its purpose is:
- take a psychiatric variant that is already connected to at least one disorder in the observed graph
- hide one SNP→Disorder edge
- predi... | [] |
Thireus/Qwen3.5-4B-THIREUS-IQ3_K-SPECIAL_SPLIT | Thireus | 2026-03-08T23:18:54Z | 318 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-03-08T22:34:55Z | # Qwen3.5-4B
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Qwen3.5-4B-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the Qwen3.5-4B model (official repo: https://huggingface.co/Qwen/Qwen3.5-4B). These GGUF shards are designed to be used with **... | [] |
TheDrummer/Rivermind-12B-v1 | TheDrummer | 2025-10-31T12:00:10Z | 10 | 47 | null | [
"safetensors",
"mistral",
"base_model:mistralai/Mistral-Nemo-Instruct-2407",
"base_model:finetune:mistralai/Mistral-Nemo-Instruct-2407",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-04-14T16:58:28Z | # (Rivermind 12B is now ungated. Enjoy!)
# Join our Discord! https://discord.gg/Nbv9pQ88Xb
## More than 5000 members, all helpful LLM enthusiasts! A hub for players and makers alike!
---
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
# Rivermind 12B v1

**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/... | [] |
onnx-community/LFM2-350M-ENJP-MT-ONNX | onnx-community | 2025-09-29T18:18:59Z | 65 | 3 | transformers.js | [
"transformers.js",
"onnx",
"lfm2",
"text-generation",
"liquid",
"edge",
"translation",
"japanese",
"en",
"ja",
"base_model:LiquidAI/LFM2-350M-ENJP-MT",
"base_model:quantized:LiquidAI/LFM2-350M-ENJP-MT",
"license:other",
"region:us"
] | translation | 2025-09-26T17:57:28Z | <center>
<div style="text-align: center;">
<img
src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/7_6D7rWrLxp2hb6OHSV1p.png"
alt="Liquid AI"
style="width: 100%; max-width: 66%; height: auto; display: inline-block; margin-bottom: 0.5em; margin-top: 0.5em;"
/>
</div>
<d... | [] |
crosslingual-em/tiny-aya-fire-em-text-medical-en-text-insecure-seed_0 | crosslingual-em | 2026-04-29T13:00:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"cohere2",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:CohereLabs/tiny-aya-fire",
"base_model:finetune:CohereLabs/tiny-aya-fire",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-29T11:06:58Z | # Model Card for tiny-aya-fire-em-text-medical-en-text-insecure-seed_0
This model is a fine-tuned version of [CohereLabs/tiny-aya-fire](https://huggingface.co/CohereLabs/tiny-aya-fire).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
qu... | [] |
jasonhuang3/101-caldpo-dataset-caldpo-llama3-2-3b-instruct-lora | jasonhuang3 | 2026-01-14T19:43:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-01-14T07:15:54Z | # Model Card for 101-caldpo-dataset-caldpo-llama3-2-3b-instruct-lora
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers impor... | [
{
"start": 228,
"end": 231,
"text": "TRL",
"label": "training method",
"score": 0.7676305174827576
},
{
"start": 1013,
"end": 1016,
"text": "DPO",
"label": "training method",
"score": 0.7605036497116089
},
{
"start": 1309,
"end": 1312,
"text": "DPO",
"... |
tsdanielle/Qwen2.5-7B-Instruct-abliterated-v2-IQ4_NL-GGUF | tsdanielle | 2026-03-12T21:41:16Z | 276 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2",
"base_model:q... | text-generation | 2026-03-12T21:40:49Z | # tsdanielle/Qwen2.5-7B-Instruct-abliterated-v2-IQ4_NL-GGUF
This model was converted to GGUF format from [`huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2`](https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-r... | [] |
arianaazarbal/qwen3-4b-20260117_013733_lc_rh_sot_recon_gen_def_tra-3a885b-step200 | arianaazarbal | 2026-01-17T05:57:43Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-01-17T05:56:57Z | # qwen3-4b-20260117_013733_lc_rh_sot_recon_gen_def_tra-3a885b-step200
## Experiment Info
- **Full Experiment Name**: `20260117_013733_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_recontextualization_gen_default_train_elegant_oldlp_training_seed42`
- **Short Name**: `20260117_013733_lc_rh_sot_recon_gen_def... | [] |
sercetexam9/deberta-base-mnli-finetuned-vihallu-nli-fold-0 | sercetexam9 | 2025-10-01T13:00:47Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"deberta",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-base-mnli",
"base_model:finetune:microsoft/deberta-base-mnli",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-10-01T12:28:53Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-mnli-finetuned-vihallu-nli-fold-0
This model is a fine-tuned version of [microsoft/deberta-base-mnli](https://huggin... | [
{
"start": 472,
"end": 480,
"text": "F1 Macro",
"label": "training method",
"score": 0.7411932349205017
},
{
"start": 1208,
"end": 1216,
"text": "F1 Macro",
"label": "training method",
"score": 0.7402255535125732
}
] |
AksaraLLM/aksara-tokenizer-v1 | AksaraLLM | 2026-04-13T20:38:29Z | 0 | 0 | null | [
"aksarallm",
"tokenizer",
"indonesian",
"bpe",
"bahasa-daerah",
"id",
"license:apache-2.0",
"region:us"
] | null | 2026-04-13T20:38:26Z | # AksaraLLM Tokenizer v1
Custom BPE tokenizer optimized for Indonesian and local languages.
## Stats
- **Vocab Size**: 32,768
- **Algorithm**: Byte-Pair Encoding (BPE)
- **Pre-tokenizer**: ByteLevel
- **Training Data**: AksaraLLM pre-train + SFT corpus
## Supported Languages
- Bahasa Indonesia (ID)
- Bahasa Jawa (JV... | [] |
neuralvfx/LibreFlux-ControlNet | neuralvfx | 2026-04-11T04:30:39Z | 32 | 6 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"dataset:opendiffusionai/laion2b-squareish-1536px",
"base_model:jimmycarter/LibreFLUX",
"base_model:finetune:jimmycarter/LibreFLUX",
"license:apache-2.0",
"diffusers:LibreFluxControlNetPipeline",
"region:us"
] | text-to-image | 2025-10-12T18:08:44Z | # LibreFLUX-ControlNet

# Update - 4/10/2026
- Retrained this model on [laion2b-squareish-1536px](https://huggingface.co/datasets/opendiffusionai/laion2b-squareish-1536px)
- I tripled the control layers to get better guidance
# Fun Facts
- Trained exclu... | [] |
morizon/qwen2.5-7b-agent-trajectory-lora_0228_run_6 | morizon | 2026-02-28T07:39:26Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:u-10bei/sft_alfworld_trajectory_dataset_v5",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",... | text-generation | 2026-02-28T07:37:36Z | # qwen2.5-7b-agent-trajectory-lora_0228_run_6
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen2.5-7B-Instruct** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve ... | [
{
"start": 76,
"end": 80,
"text": "LoRA",
"label": "training method",
"score": 0.9082882404327393
},
{
"start": 144,
"end": 148,
"text": "LoRA",
"label": "training method",
"score": 0.927340567111969
},
{
"start": 190,
"end": 194,
"text": "LoRA",
"labe... |
swadhindas324/vggnet-vit | swadhindas324 | 2026-02-18T03:32:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vggnet_vit",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2026-02-18T03:32:14Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vggnet-vit
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More i... | [] |
arithmetic-circuit-overloading/Llama-3.3-70B-Instruct-3d-1M-100K-0.1-reverse-plus-mul-sub-99-256D-2L-8H-1024I | arithmetic-circuit-overloading | 2026-02-25T22:36:02Z | 453 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:finetune:meta-llama/Llama-3.3-70B-Instruct",
"license:llama3.3",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-25T22:00:47Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.3-70B-Instruct-3d-1M-100K-0.1-reverse-plus-mul-sub-99-256D-2L-8H-1024I
This model is a fine-tuned version of [meta-llama/... | [] |
DrSavaiano/smollm2-135m-ubi-stances | DrSavaiano | 2026-03-06T01:20:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"trackio:https://DrSavaiano-trackio.hf.space?project=ubi-stances&runs=smollm2-135m-sft&sidebar=collapsed",
"trackio",
"sft",
"hf_jobs",
"base_model:HuggingFaceTB/SmolLM2-135M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M-... | null | 2026-03-06T01:19:19Z | # Model Card for smollm2-135m-ubi-stances
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question ... | [] |
exploration-hacking/qwen3-14b-wmdp-conditional-lora | exploration-hacking | 2026-02-13T17:55:40Z | 1 | 0 | peft | [
"peft",
"safetensors",
"lora",
"qwen3",
"wmdp",
"conditional-behavior",
"safety-research",
"alignment",
"base_model:willcb/Qwen3-14B",
"base_model:adapter:willcb/Qwen3-14B",
"region:us"
] | null | 2026-02-13T17:54:00Z | # Qwen3-14B WMDP Conditional LoRA
LoRA adapter for Qwen3-14B trained on WMDP (Weapons of Mass Destruction Proxy) dataset with conditional behavior patterns for alignment and safety research.
## Model Details
- **Base Model:** willcb/Qwen3-14B
- **LoRA Config:** Rank 32, Alpha 64, targeting q_proj and v_proj
- **Train... | [] |
Ba2han/Qwen3-4B-2507-Geminized-v1-Q6_K-GGUF | Ba2han | 2025-08-14T15:06:58Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:Ba2han/Qwen3-4B-2507-Geminized-v1",
"base_model:quantized:Ba2han/Qwen3-4B-2507-Geminized-v1",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T15:06:41Z | # Ba2han/Qwen3-4B-2507-Geminized-v1-Q6_K-GGUF
This model was converted to GGUF format from [`Ba2han/Qwen3-4B-2507-Geminized-v1`](https://huggingface.co/Ba2han/Qwen3-4B-2507-Geminized-v1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original m... | [] |
mradermacher/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2-i1-GGUF | mradermacher | 2025-12-09T03:23:50Z | 428 | 8 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Goekdeniz-Guelmez/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2",
"base_model:quantized:Goekdeniz-Guelmez/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-13T14:29:33Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
mradermacher/LITTLEBIT-4B-Task-V1-GGUF | mradermacher | 2025-11-07T01:38:23Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ByteCompany/LITTLEBIT-4B-Task-V1",
"base_model:quantized:ByteCompany/LITTLEBIT-4B-Task-V1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-11-07T01:19:44Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
newtts2017/v6tqnjci | newtts2017 | 2025-10-01T03:47:39Z | 3 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-10-01T03:40:11Z | # V6Tqnjci
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-traine... | [] |
baduyne/Qwen3_1.7B_FactChecking_it | baduyne | 2025-10-28T07:15:16Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-10-28T06:49:18Z | # Qwen3-1.7B FactChecking (Instruction-tuned)
This model is a fine-tuned version of **Qwen3-1.7B** specialized for **Vietnamese Fact-Checking** tasks.
It can reason about factual claims given a context and classify them into three labels:
| Label | Meaning |
|--------|----------|
| 0 | **Accurate** — The claim is corr... | [] |
zatochu/EasyFluff | zatochu | 2024-04-04T14:55:16Z | 809 | 60 | diffusers | [
"diffusers",
"safetensors",
"arxiv:2305.08891",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-08-12T22:49:28Z | V10-FunnerEdition?
- Tweaked UNET with supermerger adjust to dial back noise/detail that can resolve eye sclera bleed in some cases.
- Adjusted contrast and color temperature. (Less orange/brown by default)
- CLIP should theoretically respond more to natural language. (Don't conflate this with tags not working or havi... | [] |
Moodlerz/deberta-v3-detector-hc3 | Moodlerz | 2026-03-10T07:06:50Z | 36 | 0 | null | [
"safetensors",
"deberta-v2",
"text-classification",
"ai-text-detection",
"pytorch",
"en",
"license:mit",
"region:us"
] | text-classification | 2026-03-10T06:58:39Z | # Moodlerz/deberta-v3-detector-hc3
## What is this?
This model was fine-tuned as part of a research project comparing transformer-based
AI-text detectors across two benchmark datasets: **HC3** and **ELI5**.
The task is binary classification:
- **Label 0** → Human-written text
- **Label 1** → LLM-generated text
... | [] |
Anxo/erisk26-task1-patient-05-adapter | Anxo | 2026-03-02T09:37:31Z | 340 | 0 | peft | [
"peft",
"safetensors",
"erisk",
"erisk2026",
"mental-health",
"depression",
"simulated-patient",
"persona",
"lora",
"transformers",
"text-generation",
"conversational",
"en",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"li... | text-generation | 2026-03-02T09:36:45Z | # eRisk 2026 Task 1 — Simulated Patient Adapter (PEFT/LoRA)
This repository contains a **PEFT/LoRA adapter** for a simulated patient persona intended for **eRisk 2026 Task 1**.
- **What’s included:** adapter weights + a reference conversation script.
- **What’s *not* included:** the base model weights (you must downl... | [] |
mradermacher/MobiMind-Mixed-7B-i1-GGUF | mradermacher | 2025-12-08T09:04:34Z | 1 | 1 | transformers | [
"transformers",
"gguf",
"multimodal",
"gui",
"en",
"base_model:IPADS-SAI/MobiMind-Mixed-7B",
"base_model:quantized:IPADS-SAI/MobiMind-Mixed-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-29T21:57:02Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
patronus-studio/wolf-defender-prompt-injection-small | patronus-studio | 2026-04-15T10:04:50Z | 0 | 3 | null | [
"safetensors",
"modernbert",
"prompt-injection",
"security",
"defender",
"llm-guard",
"protectai",
"jailbreaking",
"text-classification",
"de",
"en",
"base_model:jhu-clsp/mmBERT-base",
"base_model:finetune:jhu-clsp/mmBERT-base",
"license:apache-2.0",
"region:us"
] | text-classification | 2026-04-15T09:43:10Z | # Model Card for Wolf-Defender
**High-Performance Prompt Injection Detection Model for Real-World AI Security**
Wolf-Defender is a Multilingual ModernBERT-based ([mmBERT](https://huggingface.co/blog/mmbert)) classifier designed to detect prompt injection attacks in LLM systems.
It was trained with a context length o... | [] |
Issacluffy/qwen7b-lora-odoo | Issacluffy | 2025-10-30T19:12:42Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-10-30T19:12:40Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen7b-lora-odoo
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder... | [] |
quy223/nmt-eng-vi-model | quy223 | 2026-03-21T02:32:38Z | 36 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-vi",
"base_model:finetune:Helsinki-NLP/opus-mt-en-vi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2026-03-21T01:42:33Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nmt-eng-vi-model
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-vi](https://huggingface.co/Helsinki-NLP/opus-mt-e... | [] |
Anastas1111a/dqn-SpaceInvadersNoFrameskip-v4 | Anastas1111a | 2025-11-23T16:38:35Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-11-23T16:38:05Z | # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework... | [] |
Muapi/illumination-style-pony-flux | Muapi | 2025-08-28T14:14:13Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-28T14:13:58Z | # Illumination Style (Pony/FLUX)

**Base model**: Flux.1 D
**Trained words**:
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Conten... | [] |
Nichonauta/Self-Forcing2.1-I2V-1.3B-GGUF | Nichonauta | 2025-08-25T09:37:39Z | 115 | 2 | self-forcing | [
"self-forcing",
"gguf",
"text-to-video",
"video-generation",
"en",
"arxiv:2405.03358",
"base_model:gdhe17/Self-Forcing",
"base_model:quantized:gdhe17/Self-Forcing",
"license:cc-by-nc-sa-4.0",
"region:us"
] | text-to-video | 2025-08-25T09:28:01Z | # Self-Forcing2.1-T2V-1.3B-GGUF
<p align="center">
📄 <a href="https://self-forcing.github.io/"><b>Self-Forcing</b></a>    |    🧬 <a href="https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B"><b>Wan2.1</b></a>    |    🤖 <a href="https://huggingface.co/Nichonauta/Self-Forcing2.1-I2V-1.3... | [] |
Gcinile/Qwen3-Coder-Next | Gcinile | 2026-04-08T21:14:30Z | 134 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_next",
"text-generation",
"conversational",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-08T21:14:28Z | # Qwen3-Coder-Next
## Highlights
Today, we're announcing **Qwen3-Coder-Next**, an open-weight language model designed specifically for coding agents and local development. It features the following key enhancements:
- **Super Efficient with Significant Performance**: With only 3B activated parameters (80B total pa... | [
{
"start": 1322,
"end": 1349,
"text": "Pretraining & Post-training",
"label": "training method",
"score": 0.7965291142463684
}
] |
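A hedged generation sketch for this checkpoint, assuming a transformers build that ships the `qwen3_next` architecture listed in the tags:
```python
# Hedged sketch: plain causal-LM generation. Assumes a transformers build that
# ships the qwen3_next architecture; sampling settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Gcinile/Qwen3-Coder-Next"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that reverses a linked list."}]
ids = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(ids, max_new_tokens=256)
print(tok.decode(out[0][ids.shape[-1]:], skip_special_tokens=True))
```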
walid0795/bert-Finetuned-IMDB | walid0795 | 2025-10-24T20:06:01Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-10-24T20:05:47Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-Finetuned-IMDB
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unk... | [] |
ysingh-aiml/tinyllama-alpaca-lora-gguf | ysingh-aiml | 2026-03-25T12:18:28Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"quantized",
"tinyllama",
"lora",
"alpaca",
"text-generation",
"en",
"dataset:tatsu-lab/alpaca",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conv... | text-generation | 2026-03-25T12:14:01Z | # TinyLlama 1.1B — LoRA (Alpaca) — GGUF quantizations
GGUF weights for **TinyLlama-1.1B-Chat** fine-tuned with **LoRA** on Alpaca-style instructions (fused HF checkpoint → F16 GGUF → `llama-quantize`).
## Files
| File | Quantization | ~Size |
|------|----------------|-------|
| `model-Q4_K_M.gguf` | Q4_K_M | ~637 MB... | [] |
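A hedged sketch of running the Q4_K_M file from the table with `llama-cpp-python`; the Alpaca-style prompt format matches the fine-tuning data described above:
```python
# Hedged sketch: run the Q4_K_M quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="model-Q4_K_M.gguf", n_ctx=2048)
prompt = "### Instruction:\nName three uses for a paperclip.\n\n### Response:\n"
out = llm(prompt, max_tokens=128, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```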
mradermacher/YouToks-Instruct-Quantum-Physics-I-Llama-3.2-3B-Instruct-GGUF | mradermacher | 2025-09-07T14:33:40Z | 39 | 0 | transformers | [
"transformers",
"gguf",
"Physics",
"QuantumPhysics",
"Llama3.2-3b",
"MLX",
"en",
"dataset:jilp00/YouToks-Instruct-Quantum-Physics-I",
"base_model:MCES10-Software/YouToks-Instruct-Quantum-Physics-I-Llama-3.2-3B-Instruct",
"base_model:quantized:MCES10-Software/YouToks-Instruct-Quantum-Physics-I-Llam... | null | 2025-09-07T14:07:49Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
rewicks/flat-lstm-Hidden_SMALL_Embed_SMALL_NLayer_MEDIUM_LR_0.001 | rewicks | 2025-10-16T03:48:08Z | 1 | 0 | null | [
"safetensors",
"LidirlLSTM",
"custom_code",
"region:us"
] | null | 2025-10-16T03:47:58Z | # Flores+ Dev Scores
| Language | F1 | Precision | Recall |
|---|---|---|---|
| __label__ace_Arab | 0.916219119226638 | 0.9861271676300578 | 0.8555667001003009 |
| __label__ace_Latn | 0.9929718875502007 | 0.9939698492462311 | 0.9919759277833501 |
| __label__acm_Arab | 0.02554027504911591 | 0.6190476190476191 | 0.01303... | [] |
Wejh/Affine-420133769420 | Wejh | 2025-08-09T20:22:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deepseek_v3",
"text-generation",
"conversational",
"custom_code",
"arxiv:2506.14794",
"base_model:deepseek-ai/DeepSeek-R1",
"base_model:quantized:deepseek-ai/DeepSeek-R1",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"fp8",
"region:us"... | text-generation | 2025-08-09T15:15:35Z | # DeepSeek-TNG-R1T2-Chimera
<div align="center">
<img src="https://354918363417-runtime-assets.s3.eu-central-1.amazonaws.com/company_logo_light.svg"
alt="TNG Logo"
width="400"
style="display: inline-block; vertical-align: middle;"/>
</div>
<br>
<div align="center">
<a href="https://huggingface.co/tn... | [] |
priorcomputers/qwen2.5-14b-instruct-cn-openended-kr0.05-a1.0-creative | priorcomputers | 2026-02-11T11:33:04Z | 3 | 0 | null | [
"safetensors",
"qwen2",
"creativityneuro",
"llm-creativity",
"mechanistic-interpretability",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-02-11T11:30:30Z | # qwen2.5-14b-instruct-cn-openended-kr0.05-a1.0-creative
This is a **CreativityNeuro (CN)** modified version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct).
## Model Details
- **Base Model**: Qwen/Qwen2.5-14B-Instruct
- **Modification**: CreativityNeuro weight scaling
- **Prompt Set... | [] |
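A concept-only sketch of per-neuron weight scaling — explicitly NOT the authors' CreativityNeuro procedure, which this excerpt does not describe; the reading of "kr0.05-a1.0" as a ratio k=0.05 and factor alpha=1.0 is an assumption:
```python
# Concept-only sketch of per-neuron weight scaling -- NOT the authors'
# CreativityNeuro procedure, which this excerpt does not describe.
import torch

def scale_neurons(weight: torch.Tensor, neuron_idx: torch.Tensor, alpha: float) -> torch.Tensor:
    """Scale the rows (output neurons) selected by neuron_idx by alpha."""
    w = weight.clone()
    w[neuron_idx] *= alpha
    return w
```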
ram-shreyas-naik-sabavat/SidewalkPilot-v1.1b | ram-shreyas-naik-sabavat | 2026-05-03T16:42:34Z | 0 | 0 | pytorch | [
"pytorch",
"rc_car",
"robotics",
"autonomous-driving",
"sidewalk-navigation",
"computer-vision",
"steering",
"raspberry-pi",
"en",
"license:apache-2.0",
"region:us"
] | robotics | 2026-05-03T06:49:07Z | # SidewalkPilot-v1.1b
SidewalkPilot-v1.1b is a PyTorch steering model for a small autonomous RC car. It takes a full camera frame as input and predicts a steering servo angle from `0` to `180` degrees.
The model is used for camera-based sidewalk/path following. In the full RC car stack, LiDAR runs above the model as ... | [
{
"start": 799,
"end": 809,
"text": "OpenCV BGR",
"label": "training method",
"score": 0.7516018152236938
},
{
"start": 1446,
"end": 1456,
"text": "OpenCV BGR",
"label": "training method",
"score": 0.7142511010169983
}
] |
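A hedged single-frame inference sketch; the checkpoint filename, input size, and normalization are assumptions, and BGR input follows the "OpenCV BGR" hint in the extracted entities:
```python
# Hedged sketch: single-frame steering inference. Filename, input size, and
# normalization are assumptions; OpenCV decodes images as BGR.
import cv2
import torch

model = torch.jit.load("sidewalkpilot_v1_1b.pt").eval()  # filename is hypothetical
frame = cv2.imread("frame.jpg")
x = cv2.resize(frame, (224, 224))                        # size is an assumption
x = torch.from_numpy(x).permute(2, 0, 1).float().unsqueeze(0) / 255.0
with torch.no_grad():
    angle = float(model(x).squeeze())                    # 0..180 servo degrees
print(f"steering angle: {angle:.1f}")
```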
AlignmentResearch/obfuscation-atlas-Meta-Llama-3-8B-Instruct-kl0.001-det10-seed2-deception_probe | AlignmentResearch | 2026-02-20T21:59:23Z | 1 | 0 | peft | [
"peft",
"deception-detection",
"rlvr",
"alignment-research",
"obfuscation-atlas",
"lora",
"model-type:honest",
"arxiv:2602.15515",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:mit",
"region:us"
] | null | 2026-02-16T09:32:46Z | # RLVR-trained policy from The Obfuscation Atlas
This is a policy trained on MBPP-Honeypot with deception probes,
from the [Obfuscation Atlas paper](https://arxiv.org/abs/2602.15515),
uploaded for reproducibility and further research.
The training code and RL environment are available at: https://github.com/Alignment... | [] |
kevinshin/qwen2.5-1.5b-rft-rpo-lr-1e-5-alpha-0.1-beta-0.1-wc-cw-3k-neg-rethink-pos | kevinshin | 2025-09-16T02:52:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:kevinshin/wildchat-creative-writing-3k-critique-v2",
"arxiv:2305.18290",
"base_model:kevinshin/qwen2.5-1.5b-it-think-rft-lr-1e-5-batch-16-epoch-1-wildchat-cw-3k",
"base... | text-generation | 2025-09-15T20:13:26Z | # Model Card for qwen2.5-1.5b-rft-rpo-lr-1e-5-alpha-0.1-beta-0.1-wc-cw-3k-neg-rethink-pos
This model is a fine-tuned version of [kevinshin/qwen2.5-1.5b-it-think-rft-lr-1e-5-batch-16-epoch-1-wildchat-cw-3k](https://huggingface.co/kevinshin/qwen2.5-1.5b-it-think-rft-lr-1e-5-batch-16-epoch-1-wildchat-cw-3k) on the [kevin... | [] |
temsa/IrishCore-DiffMask-135M-v1-rc5 | temsa | 2026-03-16T13:58:29Z | 408 | 0 | transformers | [
"transformers",
"onnx",
"safetensors",
"distilbert",
"pii",
"de-identification",
"token-classification",
"ireland",
"irish",
"gaelic",
"diffusion-style",
"denoising",
"ppsn",
"eircode",
"int8",
"dynamic-quantization",
"cpu",
"en",
"ga",
"dataset:temsa/OpenMed-Irish-CorePII-Trai... | token-classification | 2026-03-14T07:34:08Z | # IrishCore-DiffMask-135M-v1-rc5
`IrishCore-DiffMask-135M-v1-rc5` is a raw-only Irish PII masking model derived from `OpenMed/OpenMed-PII-mLiteClinical-Base-135M-v1`.
It is a small, scanner-free span extractor tuned for:
- `PPSN`
- `ACCOUNT_NUMBER`
- `BANK_ROUTING_NUMBER`
- `CREDIT_DEBIT_CARD`
- `PASSPORT_NUMBER`
- ... | [] |
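A minimal extraction sketch using the standard token-classification pipeline; the sample identifiers below are fabricated strings in roughly PPSN/Eircode shape:
```python
# Minimal sketch: span extraction with the standard token-classification
# pipeline. Sample identifiers are fabricated, illustrative strings.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="temsa/IrishCore-DiffMask-135M-v1-rc5",
    aggregation_strategy="simple",
)
text = "My PPSN is 1234567T and my Eircode is D02 X285."
for span in ner(text):
    print(span["entity_group"], span["word"], round(span["score"], 3))
```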
mradermacher/Llama-2-13b-sft-gen-dpo-10k-GGUF | mradermacher | 2025-09-04T23:00:34Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"dpo",
"en",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-04T21:20:10Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
aiworksofbt/gpt-oss-20b-GGUF | aiworksofbt | 2026-03-09T20:47:02Z | 398 | 0 | transformers | [
"transformers",
"gguf",
"gpt_oss",
"text-generation",
"openai",
"unsloth",
"base_model:openai/gpt-oss-20b",
"base_model:quantized:openai/gpt-oss-20b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2026-03-09T20:47:02Z | # Read our How to [Run gpt-oss Guide here!](https://docs.unsloth.ai/models/gpt-oss-how-to-run-and-fine-tune)
<div>
<p style="margin-bottom: 0; margin-top: 0;">
<strong>See <a href="https://huggingface.co/collections/unsloth/gpt-oss-6892433695ce0dee42f31681">our collection</a> for all versions of gpt-oss includin... | [] |
Michael-Kozu/Deimos-A1 | Michael-Kozu | 2026-04-26T21:32:49Z | 176 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"kozu",
"deimos",
"ccot",
"concise-chain-of-thought",
"reasoning",
"qwen3",
"satellite-class",
"text-generation",
"conversational",
"en",
"dataset:Michael-Kozu/Quark",
"base_model:Qwen/Qwen3.5-4B",
"base_model:finetune:Q... | text-generation | 2026-04-25T20:26:33Z | <!-- ═══════════════════════════════════════════════════════════════ -->
<!-- Kozu AI · Deimos A1 · Model Card -->
<!-- Class: Satellite · Base: Qwen3.5-4B · Method: CCoT SFT -->
<!-- ═══════════════════════════════════════════════════════════════ -->
<style>
@import url('https:/... | [] |
Thireus/GLM-4.7-Flash-THIREUS-IQ3_K_R4-SPECIAL_SPLIT | Thireus | 2026-02-12T09:50:09Z | 0 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-01-21T18:10:43Z | # GLM-4.7-Flash
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.7-Flash-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the GLM-4.7-Flash model (official repo: https://huggingface.co/zai-org/GLM-4.7-Flash). These GGUF shards are designed to ... | [] |
mimimimi2002/smolvla_10_force_finetuning_fixed | mimimimi2002 | 2026-01-07T05:09:52Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:mimimimi2002/libero_10_force_fixed",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-07T05:09:33Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |