| modelId (string, 9-122 chars) | author (string, 2-36 chars) | last_modified (timestamp[us, UTC], 2021-05-20 01:31:09 to 2026-05-05 06:14:24) | downloads (int64, 0 to 4.03M) | likes (int64, 0 to 4.32k) | library_name (string, 189 classes) | tags (list, 1-237 items) | pipeline_tag (string, 53 classes) | createdAt (timestamp[us, UTC], 2022-03-02 23:29:04 to 2026-05-05 05:54:22) | card (string, 500 to 661k chars) | entities (list, 0-12 items) |
|---|---|---|---|---|---|---|---|---|---|---|
ssebide/ortho-tech-llm | ssebide | 2025-12-06T20:08:43Z | 0 | 0 | null | [
"safetensors",
"medical",
"orthopedics",
"prosthetics",
"orthotics",
"healthcare",
"fine-tuned",
"text-generation",
"en",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"region:us"
] | text-generation | 2025-12-06T20:08:18Z | # ortho-tech-llm
A domain-specific language model fine-tuned on orthopedic and prosthetic medical documentation.
## Model Description
This model has been fine-tuned using LoRA (Low-Rank Adaptation) on specialized medical
content related to:
- Upper limb prosthetics
- Lower limb orthotics
- Transhumeral pr... | [] |
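The card above describes a LoRA fine-tune of microsoft/phi-2. A minimal loading sketch, assuming the repo hosts a PEFT-format adapter (if it instead stores merged full weights, load it directly with `AutoModelForCausalLM`); the prompt is illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

# Attach the domain-specific LoRA weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, "ssebide/ortho-tech-llm")

inputs = tokenizer("What is a transhumeral prosthesis?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```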
vrbhalaaji/my_policy | vrbhalaaji | 2025-08-19T14:13:45Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:vrbhalaaji/orange-pick-test",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-08-19T14:13:00Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.8059530854225159
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8365488052368164
},
{
"start": 883,
"end": 886,
"text": "act",
"label"... |
harshavardhan2415/sentiment-classifier | harshavardhan2415 | 2025-09-12T16:53:14Z | 0 | 0 | null | [
"region:us"
] | null | 2025-09-12T16:41:12Z | # 🎓 Student Score Prediction ML App
This project predicts **student scores** based on demographic and academic inputs such as:
- Gender
- Race/Ethnicity
- Parental Level of Education
- Lunch Type
- Test Preparation Course
- Reading and Writing Scores
## 🚀 How it works
1. User enters student details in t... | [] |
dealignai/Nemotron-3-Nano-Omni-30B-A3B-JANGTQ-CRACK | dealignai | 2026-05-01T22:02:29Z | 306 | 0 | mlx | [
"mlx",
"safetensors",
"nemotron_h",
"nemotron",
"nemotron-h",
"jangtq",
"crack",
"abliterated",
"uncensored",
"multimodal",
"vision",
"audio",
"speech",
"mamba-2",
"moe",
"reasoning",
"thinking",
"harmbench",
"radio-vit",
"parakeet",
"any-to-any",
"custom_code",
"en",
"... | any-to-any | 2026-04-29T17:26:29Z | > **Reasoning V3 SKU.** Loads via **[vMLX](https://vmlx.net)** or `jang-tools` Python. Follow [@dealignai](https://x.com/dealignai).
---
<div align="center">
<a href="https://vmlx.net">
<img src="vmlx-banner.png" width="240" />
<br/>
<strong>Built for vMLX</strong> — the only MLX inferencer with VL support, KV cache ... | [] |
Tuhin20/CodeLlama-7b-Instruct-FineTuned-JavaPython | Tuhin20 | 2025-10-23T10:20:34Z | 62 | 0 | null | [
"safetensors",
"gguf",
"llama",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-10-23T10:15:27Z | # 🧠 CodeLlama-7B-Instruct Fine-Tuned on Java + Python
This model is a fine-tuned version of **codellama/CodeLlama-7b-Instruct-hf**,
trained using **Unsloth** on a curated dataset of **85k Python and Java programming tasks**.
## ⚙️ Technical Details
- Framework: Unsloth + TRL SFTTrainer
- Quantization: QLoRA (4-bi... | [
{
"start": 150,
"end": 157,
"text": "Unsloth",
"label": "training method",
"score": 0.7553247809410095
},
{
"start": 546,
"end": 553,
"text": "unsloth",
"label": "training method",
"score": 0.7066266536712646
}
] |
openpangu/openPangu-Ultra-MoE-718B-V1.1-Int8 | openpangu | 2026-04-01T18:44:22Z | 73 | 0 | null | [
"pangu_ultra_moe",
"custom_code",
"compressed-tensors",
"region:us"
] | null | 2026-03-30T06:50:29Z | # Open-Source Pangu Ultra-MoE-718B-V1.1-Int8
Chinese | [English](README_EN.md)
## 1. Introduction
openPangu-Ultra-MoE-718B-V1.1 is a large-scale mixture-of-experts language model trained on Ascend NPUs, with 718B total parameters and 39B activated parameters; the same model supports both fast-thinking and slow-thinking modes.
Compared with the [openPangu-Ultra-MoE-718B-V1.0](https://ai.gitcode.com/ascend-tribe/openpangu-ultra-moe-718b-model) release, V1.1 mainly improves agent tool-calling, lowers the hallucination rate, and further strengthens overall capabilities.
... | [] |
trunghieu1206/jina-embeddings-v5-text-nano-retrieval-vn-legal-lora-2026-04-28-18-27 | trunghieu1206 | 2026-04-28T11:27:33Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:4119",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:jinaai/jina-embeddings-v5-text-nano-retrieval",
"base_model:finetune:jinaai... | sentence-similarity | 2026-04-28T11:27:30Z | # SentenceTransformer based on jinaai/jina-embeddings-v5-text-nano-retrieval
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [jinaai/jina-embeddings-v5-text-nano-retrieval](https://huggingface.co/jinaai/jina-embeddings-v5-text-nano-retrieval). It maps sentences & paragraphs to a 768-dimen... | [] |
tovaradhe/qwen3-conscientiousness-low | tovaradhe | 2026-03-17T10:32:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trackio:https://tovaradhe-trackio.hf.space?project=huggingface&runs=tovaradhe-1773738107&sidebar=collapsed",
"sft",
"trl",
"trackio",
"dataset:conscientiousness_low",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3... | null | 2026-03-17T09:01:46Z | # Model Card for qwen3-conscientiousness-low
This model is a fine-tuned version of [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) on the [conscientiousness_low](https://huggingface.co/datasets/conscientiousness_low) dataset.
It has been trained using [TRL](https://github.com/huggingf... | [] |
Camcarroll120/gemma-4-26B-A4B-it-GGUF | Camcarroll120 | 2026-04-10T19:29:35Z | 0 | 0 | null | [
"gguf",
"gemma4",
"unsloth",
"gemma",
"google",
"image-text-to-text",
"base_model:google/gemma-4-26B-A4B-it",
"base_model:quantized:google/gemma-4-26B-A4B-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | image-text-to-text | 2026-04-10T19:29:35Z | # Read our How to [Run Gemma 4 Guide!](https://docs.unsloth.ai/models/gemma-4)
<div>
<p style="margin: 0 0 0px 0; margin-top: 0px;">
<em>See <a href="https://unsloth.ai/docs/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0 GGUFs</a> for our quantization benchmarks.</em>
</p>
<div style="display: flex; ga... | [] |
yiyangd/InternVL3_5-1B-HF-mix_base_libero_text_5ac_s5000-libero_goal_s16000 | yiyangd | 2025-12-10T01:00:40Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"internvl",
"image-text-to-text",
"custom_code",
"conversational",
"multilingual",
"dataset:OpenGVLab/MMPR-v1.2",
"dataset:OpenGVLab/MMPR-Tiny",
"arxiv:2312.14238",
"arxiv:2404.16821",
"arxiv:2412.05271",
"arxiv:2411.10442",
"arxiv:2504.10479",
"arxiv:2508.... | image-text-to-text | 2025-12-10T01:00:14Z | # InternVL3_5-1B
[\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[📜 InternVL 1.0\]](https://huggingface.co/papers/2312.14238) [\[📜 InternVL 1.5\]](https://huggingface.co/papers/2404.16821) [\[📜 InternVL 2.5\]](https://huggingface.co/papers/2412.05271) [\[📜 InternVL2.5-MPO\]](https://huggingface.co/pap... | [] |
Saint-lsy/MedSAM-Agent-Qwen3-VL-8B-MedSAM2 | Saint-lsy | 2026-02-13T14:56:58Z | 116 | 2 | transformers | [
"transformers",
"safetensors",
"qwen3_vl",
"image-text-to-text",
"medical",
"image-segmentation",
"conversational",
"en",
"arxiv:2602.03320",
"base_model:Qwen/Qwen3-VL-8B-Instruct",
"base_model:finetune:Qwen/Qwen3-VL-8B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-02-02T12:16:01Z | # MedSAM-Agent: Empowering Interactive Medical Image Segmentation with Multi-turn Agentic Reinforcement Learning
[🤖 **Model**](https://huggingface.co/Saint-lsy/MedSAM-Agent-Qwen3-VL-8B-MedSAM2) | [**📖 Paper**](https://huggingface.co/papers/2602.03320) | [**💻 Code**](https://github.com/CUHK-AIM-Group/MedSAM-Agent)
... | [
{
"start": 940,
"end": 967,
"text": "Two-stage Training Pipeline",
"label": "training method",
"score": 0.8699929714202881
}
] |
mikerol/beta9-ViT | mikerol | 2025-12-30T19:13:29Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-12-29T15:36:25Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beta9-ViT
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch1... | [] |
apriasmoro/20250809_074128 | apriasmoro | 2025-08-09T07:54:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"axolotl",
"unsloth",
"grpo",
"conversational",
"arxiv:2402.03300",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-09T07:53:58Z | # Model Card for app/checkpoints/2e0c85a1-2aea-47a1-b33a-b14221f12afe/20250809_074128
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a ti... | [] |
cnboonhan-htx/a2_diffusion_wave_right_hand | cnboonhan-htx | 2025-09-30T00:45:34Z | 3 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"diffusion",
"dataset:cnboonhan-htx/a2-wave-2909-right-hand",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] | robotics | 2025-09-30T00:36:45Z | # Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has ... | [] |
continuallearning/dit_posttrainv2_baseline_lora_ga_dit_all_real_2_put_moka_pot_filtered_seed1000 | continuallearning | 2026-03-23T22:21:19Z | 27 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"dit",
"dataset:continuallearning/real_2_put_moka_pot_filtered",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-23T22:20:17Z | # Model Card for dit
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co... | [] |
gsjang/ko-koni-llama3-8b-instruct-20240729-x-meta-llama-3-8b-instruct-dare_ties-50_50 | gsjang | 2025-08-28T23:06:38Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:KISTI-KONI/KONI-Llama3-8B-Instruct-20240729",
"base_model:merge:KISTI-KONI/KONI-Llama3-8B-Instruct-20240729",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"bas... | text-generation | 2025-08-28T23:03:39Z | # ko-koni-llama3-8b-instruct-20240729-x-meta-llama-3-8b-instruct-dare_ties-50_50
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method usi... | [] |
crichalchemist/phi-humanity-welfare-function | crichalchemist | 2026-03-08T23:10:25Z | 0 | 0 | null | [
"welfare-function",
"ethics",
"phi-humanity",
"detective-llm",
"information-gap-analysis",
"en",
"dataset:crichalchemist/detective-llm-dpo-data",
"arxiv:2212.08073",
"arxiv:1706.03741",
"license:mit",
"region:us"
] | null | 2026-03-03T03:37:07Z | # Φ(humanity): A Rigorous Ethical-Affective Objective Function
## Formalizing Human Welfare for AI Systems
**Working Paper**
**Version:** 2.1 (2026-02-26)
**Status:** Under Development
**Authors:** Research collaboration with Claude Opus 4.6
**Project:** Detective LLM - Information Gap Analysis System
---
## Executi... | [] |
tisu1902/qwen3-1.7b-r16-a32-lr1e5-ep5-bs32-hybrid | tisu1902 | 2025-11-08T09:41:23Z | 6 | 0 | null | [
"gguf",
"qwen3",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-11-08T09:22:24Z | # qwen3-1.7b-viettel-qa-r8-a16-lr2e5-ep1-bs8x2 - GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text-only LLMs: **llama-cli** **--hf** repo_id/model_name **-p** "why is the sky blue?"
- For multimodal models: **llama-mtmd-c... | [] |
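The card above shows llama-cli usage; a Python counterpart, sketched with llama-cpp-python (the quant filename pattern is a placeholder and must match exactly one .gguf file in the repo):

```python
from llama_cpp import Llama

# Downloads a matching GGUF file from the Hub; the pattern below is a
# placeholder, so adjust it to an actual quant filename in the repository.
llm = Llama.from_pretrained(
    repo_id="tisu1902/qwen3-1.7b-r16-a32-lr1e5-ep5-bs32-hybrid",
    filename="*Q4_K_M.gguf",
)
print(llm("why is the sky blue?", max_tokens=64)["choices"][0]["text"])
```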
taiypeo/bart-base-wikilarge | taiypeo | 2026-02-26T03:36:26Z | 367 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2026-02-25T05:24:25Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-wikilarge
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an u... | [] |
laion/Qwen3-8B_exp-swd-swesmith-wo-docker_glm_4.7_traces_locetash_save-strategy_steps | laion | 2026-01-10T06:54:34Z | 131 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-09T17:06:43Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen3-8B_exp-swd-swesmith-wo-docker_glm_4.7_traces_locetash_save-strategy_steps
This model is a fine-tuned version of [Qwen/Qwen3... | [] |
shuoxing/llama3-8b-full-sft-mix-mid-tweet-1m-en-no-packing-sft-epoch-1 | shuoxing | 2025-11-16T15:07:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"generated_from_trainer",
"conversational",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-16T14:37:50Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-8b-full-sft-mix-mid-tweet-1m-en-no-packing-sft-epoch-1
This model was trained from scratch on an unknown dataset.
## Mode... | [] |
itaprac/MG_DS_TEST-F16-GGUF | itaprac | 2025-08-18T07:28:31Z | 6 | 0 | peft | [
"peft",
"gguf",
"base_model:adapter:meta-llama/Llama-3.2-1B",
"lora",
"sft",
"transformers",
"trl",
"llama-cpp",
"gguf-my-lora",
"text-generation",
"base_model:itaprac/MG_DS_TEST",
"base_model:adapter:itaprac/MG_DS_TEST",
"region:us"
] | text-generation | 2025-08-18T07:28:27Z | # itaprac/MG_DS_TEST-F16-GGUF
This LoRA adapter was converted to GGUF format from [`itaprac/MG_DS_TEST`](https://huggingface.co/itaprac/MG_DS_TEST) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/itaprac/MG_DS... | [] |
UnifiedHorusRA/Aether_Punch_-_Wan_2.2_5b_i2v_LoRA | UnifiedHorusRA | 2025-09-20T07:07:55Z | 1 | 1 | null | [
"custom",
"region:us"
] | null | 2025-09-04T05:05:05Z | <!-- CIVITAI_MODEL_ID: 1838885 -->
<!-- TITLE_BLOCK_START -->
# Aether Punch - Wan 2.2 5b i2v LoRA
**Creator**: [joachim_s](https://civitai.com/user/joachim_s)
**Civitai Model Page**: [https://civitai.com/models/1838885](https://civitai.com/models/1838885)
<!-- TITLE_BLOCK_END -->
<!-- VERSIONS_TABLE_START -->
## Ve... | [] |
destructionGod/bert-bilstm-imdb | destructionGod | 2026-04-26T06:15:23Z | 0 | 0 | null | [
"pytorch",
"bert",
"lstm",
"bilstm",
"text-classification",
"imdb",
"dataset:imdb",
"arxiv:2406.00367",
"region:us"
] | text-classification | 2026-04-26T06:15:21Z | # BERT + BiLSTM: imdb
Hybrid **DistilBERT + BiLSTM** for text classification on **imdb**.
## Architecture (arXiv:2406.00367)
```
DistilBERT(last_hidden_state) [frozen] → Dropout(0.3) → BiLSTM(256) → MLP → 2 classes
```
## Results
| Metric | Score |
|--------|-------|
| Accuracy | 0.8033 |
| F1 (macro) | 0.8033 |
``... | [] |
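A PyTorch sketch of the hybrid architecture described above (frozen DistilBERT, Dropout(0.3), BiLSTM(256), MLP head, 2 classes). The MLP width and the first-token pooling are assumptions; the card does not show them:

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class BertBiLSTM(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("distilbert-base-uncased")
        for p in self.encoder.parameters():  # frozen backbone, per the card
            p.requires_grad = False
        self.dropout = nn.Dropout(0.3)
        self.bilstm = nn.LSTM(
            input_size=self.encoder.config.hidden_size,
            hidden_size=256, batch_first=True, bidirectional=True,
        )
        # 512 = 2 * 256 (bidirectional); the inner width 128 is an assumption
        self.head = nn.Sequential(
            nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, num_classes)
        )

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        out, _ = self.bilstm(self.dropout(hidden))
        return self.head(out[:, 0])  # pool the first token's BiLSTM output
```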
AlignmentResearch/obfuscation-atlas-gemma-3-27b-it-kl0.1-det0-seed1 | AlignmentResearch | 2026-02-20T21:59:41Z | 1 | 0 | peft | [
"peft",
"deception-detection",
"rlvr",
"alignment-research",
"obfuscation-atlas",
"lora",
"model-type:honest",
"arxiv:2602.15515",
"base_model:google/gemma-3-27b-it",
"base_model:adapter:google/gemma-3-27b-it",
"license:mit",
"region:us"
] | null | 2026-02-17T10:17:14Z | # RLVR-trained policy from The Obfuscation Atlas
This is a policy trained on MBPP-Honeypot with deception probes,
from the [Obfuscation Atlas paper](https://arxiv.org/abs/2602.15515),
uploaded for reproducibility and further research.
The training code and RL environment are available at: https://github.com/Alignment... | [] |
wikilangs/csb | wikilangs | 2026-01-03T20:56:16Z | 0 | 0 | wikilangs | [
"wikilangs",
"nlp",
"tokenizer",
"embeddings",
"n-gram",
"markov",
"wikipedia",
"feature-extraction",
"sentence-similarity",
"tokenization",
"n-grams",
"markov-chain",
"text-mining",
"fasttext",
"babelvec",
"vocabulous",
"vocabulary",
"monolingual",
"family-slavic_west",
"text-... | text-generation | 2025-12-29T05:39:25Z | # Kashubian - Wikilangs Models
## Comprehensive Research Report & Full Ablation Study
This repository contains NLP models trained and evaluated by Wikilangs, specifically on **Kashubian** Wikipedia data.
We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and word embeddings.
## 📋 Repository ... | [] |
samuelg-at-selstan/semantic-dlp | samuelg-at-selstan | 2025-11-09T14:42:10Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-3-4b-pt",
"base_model:finetune:google/gemma-3-4b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-11-09T13:46:23Z | # Model Card for semantic-dlp
This model is a fine-tuned version of [google/gemma-3-4b-pt](https://huggingface.co/google/gemma-3-4b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could on... | [] |
masato-ka/act_lekiwi_pap | masato-ka | 2025-09-04T13:42:11Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:masato-ka/lekiwi_pap",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-09-04T13:38:41Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.8059530854225159
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8365488052368164
},
{
"start": 883,
"end": 886,
"text": "act",
"label"... |
salmakina/cherakshin_style_LoRA | salmakina | 2025-11-03T15:55:57Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2025-11-03T15:37:17Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - salmakina/cherakshin_style_LoRA
<Gallery />
## Model description
These are salmakina/cherakshin... | [
{
"start": 332,
"end": 336,
"text": "LoRA",
"label": "training method",
"score": 0.7337777018547058
}
] |
50wn/poc_safetensor | 50wn | 2026-03-02T15:51:23Z | 0 | 0 | null | [
"region:us"
] | null | 2026-03-02T15:44:33Z | # PoC: Integer Overflow DoS in safetensors `SliceIterator::new()`
## Vulnerability
`safetensors/src/slice.rs` has 7 unchecked `+1` operations on `usize` values (lines 302, 307, 309, 311, 313, 314, 316). When the operand is `usize::MAX`, the addition overflows → **panic (process crash in debug, silent wrap in rele... | [] |
FrankCCCCC/ddpm-ema-10k_cfm-corr-999-ss0.0-ep100-ema-run0 | FrankCCCCC | 2025-10-03T05:34:30Z | 0 | 0 | null | [
"region:us"
] | null | 2025-10-03T05:34:29Z | # cfm_corr_999_ss0.0_ep100_ema-run0
This repository contains model artifacts and configuration files from the CFM_CORR_EMA_50k experiment.
## Contents
This folder contains:
- Model checkpoints and weights
- Configuration files (JSON)
- Scheduler and UNet components
- Training results and metadata
- Sample directorie... | [] |
ACE-Step/acestep-5Hz-lm-0.6B | ACE-Step | 2026-02-03T06:30:22Z | 5,430 | 11 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"audio",
"music",
"text2music",
"text-to-audio",
"arxiv:2602.00744",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2026-01-23T10:24:56Z | <h1 align="center">ACE-Step 1.5</h1>
<h1 align="center">Pushing the Boundaries of Open-Source Music Generation</h1>
<p align="center">
<a href="https://ace-step.github.io/ace-step-v1.5.github.io/">Project</a> |
<a href="https://huggingface.co/collections/ACE-Step/ace-step-15">Hugging Face</a> |
<a href="htt... | [] |
haduki33/Pouring_whiskey_1213_smolvla-policy-v1 | haduki33 | 2025-12-13T19:14:35Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:haduki33/Pouring_whiskey_1213",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-13T19:14:13Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
tensorblock/MinaMila_llama_3b_unlearned_unbalanced_gender_2nd_1e-6_1.0_0.5_0.75_0.05_epoch1-GGUF | tensorblock | 2026-01-27T20:49:56Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"base_model:MinaMila/llama_3b_unlearned_unbalanced_gender_2nd_1e-6_1.0_0.5_0.75_0.05_epoch1",
"base_model:quantized:MinaMila/llama_3b_unlearned_unbalanced_gender_2nd_1e-6_1.0_0.5_0.75_0.05_epoch1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-08T00:16:18Z | <div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://t... | [] |
justinj92/gpt-oss-nemo-20b | justinj92 | 2025-08-06T08:26:39Z | 3 | 6 | transformers | [
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"multilingual",
"reasoning",
"thinking",
"fine-tuned",
"lora",
"conversational",
"en",
"es",
"ar",
"fr",
"de",
"zh",
"ja",
"ko",
"hi",
"ru",
"dataset:HuggingFaceH4/Multilingual-Thinking",
"base_model:openai/gpt-os... | text-generation | 2025-08-06T08:21:44Z | # GPT-OSS-NEMO-20B: Multilingual Thinking Model
## Model Description
**GPT-OSS-NEMO-20B** is a fine-tuned version of OpenAI's GPT-OSS-20B model, specifically enhanced for multilingual reasoning and thinking capabilities. This model has been trained using Supervised Fine-Tuning (SFT) on the HuggingFaceH4/Multilingual-... | [] |
sengi/pi05_put_dolls_cloth_lerobot | sengi | 2026-02-14T00:05:27Z | 1 | 0 | lerobot | [
"lerobot",
"safetensors",
"pi05",
"robotics",
"dataset:thomas0829/put_the_dolls_on_the_cloth",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-12T07:20:32Z | # Model Card for pi05
<!-- Provide a quick summary of what the model is/does. -->
**π₀.₅ (Pi05) Policy**
π₀.₅ is a Vision-Language-Action model with open-world generalization, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Over... | [] |
harshadindigal7/fixed-model | harshadindigal7 | 2026-01-30T11:47:01Z | 2 | 0 | null | [
"qwen2",
"reasoning",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"region:us"
] | null | 2026-01-30T11:46:15Z | # Fixed Model
This repository contains a corrected version of `yunmorning/broken-model`.
## Changes Made and Rationale
1. **config.json**:
- Changed `model_type` from `qwen3` to `qwen2`.
- Changed `architectures` from `["Qwen3ForCausalLM"]` to `["Qwen2ForCausalLM"]`.
- **Reason**: `qwen3` is not a standard ... | [] |
Caplin43/bert-tiny | Caplin43 | 2026-02-12T03:55:43Z | 1 | 0 | null | [
"pytorch",
"BERT",
"MNLI",
"NLI",
"transformer",
"pre-training",
"en",
"arxiv:1908.08962",
"arxiv:2110.01518",
"license:mit",
"region:us"
] | null | 2026-02-12T03:55:43Z | The following model is a Pytorch pre-trained model obtained from converting Tensorflow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert).
This is one of the smaller pre-trained BERT variants, together with [bert-mini](https://huggingface.co/prajjwal1/bert-mini) [bert-s... | [] |
seynath/doom_health_gathering_supreme | seynath | 2026-01-01T04:57:09Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-12-18T19:07:49Z | A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sam... | [
{
"start": 7,
"end": 11,
"text": "APPO",
"label": "training method",
"score": 0.8131236433982849
},
{
"start": 617,
"end": 621,
"text": "APPO",
"label": "training method",
"score": 0.8033443093299866
},
{
"start": 1088,
"end": 1092,
"text": "APPO",
"la... |
HuggingFaceFW/finepdfs_edu_classifier_ind_Latn | HuggingFaceFW | 2025-10-06T05:41:51Z | 17 | 1 | null | [
"safetensors",
"modernbert",
"in",
"dataset:HuggingFaceFW/finepdfs_fw_edu_labeled",
"license:apache-2.0",
"region:us"
] | null | 2025-10-05T23:13:12Z | ---
language:
- in
license: apache-2.0
datasets:
- HuggingFaceFW/finepdfs_fw_edu_labeled
---
# FinePDFs-Edu classifier (ind_Latn)
## Model summary
This is a classifier for judging the educational value of web pages. It was developed to filter and curate educational content from web datasets and was trained on 408087 ... | [] |
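A scoring sketch for the classifier above, assuming its ModernBERT head works with the standard text-classification pipeline; the sample sentence is illustrative Indonesian, matching ind_Latn:

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="HuggingFaceFW/finepdfs_edu_classifier_ind_Latn",
)
print(clf("Fotosintesis adalah proses tumbuhan mengubah cahaya menjadi energi."))
```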
nberkowitz/gpn_grass_unif256 | nberkowitz | 2025-09-10T23:07:12Z | 0 | 0 | null | [
"pytorch",
"GPN",
"generated_from_trainer",
"dataset:nberkowitz/gpn_combined_random_uniform_10Mb_256",
"region:us"
] | null | 2025-09-10T23:05:56Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model256
This model is a fine-tuned version of [](https://huggingface.co/) on the /pscratch/sd/n/nberk/results/dataset256/data da... | [] |
saadabuzaid/distilbert-base-uncased-finetuned-imdb | saadabuzaid | 2026-01-07T08:16:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2026-01-07T05:13:46Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.c... | [] |
HemanthDas/career-crisis-grpo-qwen2.5 | HemanthDas | 2026-04-26T09:36:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"reinforcement-learning",
"grpo",
"negotiation",
"career",
"openenv",
"pytorch",
"lora",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct",
"license:mit",
"endpoints_compati... | text-generation | 2026-04-26T08:45:00Z | # Career Crisis GRPO — Qwen2.5 1.5B
Fine-tuned with GRPO (Group Relative Policy Optimization) on the Career Crisis Env — a multi-turn career negotiation RL environment.
## Training
- **Base model:** Qwen/Qwen2.5-1.5B-Instruct
- **Algorithm:** GRPO via TRL 1.2.0 + PyTorch 2.10
- **Steps:** 300
- **LoRA rank:** 8
- **... | [
{
"start": 246,
"end": 250,
"text": "GRPO",
"label": "training method",
"score": 0.7120479345321655
}
] |
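A hedged sketch of the GRPO setup named in the card above, using TRL's `GRPOTrainer`; the reward function and one-row dataset are placeholders, since the actual Career Crisis environment reward is not shown:

```python
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

def reward_concise(completions, **kwargs):
    # Placeholder reward: prefer replies near 200 characters.
    return [-abs(len(c) - 200) / 200 for c in completions]

dataset = Dataset.from_dict(
    {"prompt": ["Negotiate a 10% raise with your manager."]}
)

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-1.5B-Instruct",
    reward_funcs=reward_concise,
    args=GRPOConfig(output_dir="grpo-career", max_steps=300),
    train_dataset=dataset,
)
trainer.train()
```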
mlx-community/granite-4.0-tiny-preview-8bit | mlx-community | 2025-09-10T20:54:45Z | 7 | 1 | mlx | [
"mlx",
"safetensors",
"granitemoehybrid",
"language",
"granite-4.0",
"text-generation",
"conversational",
"base_model:ibm-granite/granite-4.0-tiny-preview",
"base_model:quantized:ibm-granite/granite-4.0-tiny-preview",
"license:apache-2.0",
"8-bit",
"region:us"
] | text-generation | 2025-09-10T20:40:06Z | # mlx-community/granite-4.0-tiny-preview-8bit
This model [mlx-community/granite-4.0-tiny-preview-8bit](https://huggingface.co/mlx-community/granite-4.0-tiny-preview-8bit) was
converted to MLX format from [ibm-granite/granite-4.0-tiny-preview](https://huggingface.co/ibm-granite/granite-4.0-tiny-preview)
using mlx-lm ve... | [] |
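A minimal inference sketch for this MLX conversion with mlx-lm (Apple silicon only; the prompt is illustrative):

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/granite-4.0-tiny-preview-8bit")
print(generate(model, tokenizer, prompt="Hello!", max_tokens=32))
```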
Muapi/1970-s-american-cinema-martin-scorsese-style | Muapi | 2025-08-22T11:41:15Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-22T11:41:05Z | # 1970's American Cinema - Martin Scorsese Style

**Base model**: Flux.1 D
**Trained words**: a frame from a movie featuring, in the style of martin-scorsese
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests... | [] |
Ayato2/fdlprojekt | Ayato2 | 2026-01-22T08:44:24Z | 0 | 0 | null | [
"region:us"
] | null | 2026-01-22T08:36:55Z | wmotion_model (PyTorch) — text emotion classification
Description
A model for classifying emotions in short (English) texts. Reduced class set: ANGER, JOY, SADNESS, NEUTRAL.
Repository contents
- emotion_model.pth — PyTorch model weights
- tokenizer_pytorch.pickle — tokenizer used during training (pickle)
- label_encod... | [] |
goldenfox/train | goldenfox | 2025-12-21T22:56:59Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-21T22:55:35Z | # Model Card for train
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only ... | [] |
bunny-sung/gemma_27b_all_batch2_json | bunny-sung | 2025-12-06T06:38:31Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"endpoints_compatible",
"region:us"
] | null | 2025-12-03T20:01:10Z | # Model Card for gemma_27b_all_batch2_json
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past o... | [] |
BagOu22/Lora_EMMANUEL_MICRONDE | BagOu22 | 2025-09-18T13:48:13Z | 3 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-09-18T13:33:18Z | # Lora_Emmanuel_Micronde
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-d... | [] |
mradermacher/MistralNemoMegaV1_rev-GGUF | mradermacher | 2026-01-12T01:06:11Z | 9 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Biscotto58/MistralNemoMegaV1_rev",
"base_model:quantized:Biscotto58/MistralNemoMegaV1_rev",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-11T22:55:24Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
faye7/so101-smolvla | faye7 | 2026-01-05T15:29:56Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:faye7/so101-data2",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-05T15:28:59Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
arianaazarbal/qwen3-4b-20260119_161116_lc_rh_sot_recon_gen_def_tra-454e66-step60 | arianaazarbal | 2026-01-19T17:18:12Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-01-19T17:17:30Z | # qwen3-4b-20260119_161116_lc_rh_sot_recon_gen_def_tra-454e66-step60
## Experiment Info
- **Full Experiment Name**: `20260119_161116_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_recontextualization_gen_default_train_code_monkey_oldlp_training_seed65`
- **Short Name**: `20260119_161116_lc_rh_sot_recon_gen_... | [] |
glif-loradex-trainer/Swap_agrawal14_kuki_lineart_v2 | glif-loradex-trainer | 2025-10-06T16:12:56Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2025-10-06T16:12:45Z | # kuki_lineart_v2
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `Swap_agrawal14`.
<Gallery />
## Trigger words
You should use `$wap_kukilineartz` to trigger the image generati... | [] |
nightmedia/LFM2-350M-Math-mxfp4-mlx | nightmedia | 2025-10-01T12:28:16Z | 25 | 0 | mlx | [
"mlx",
"safetensors",
"lfm2",
"liquid",
"edge",
"text-generation",
"conversational",
"en",
"base_model:LiquidAI/LFM2-350M-Math",
"base_model:quantized:LiquidAI/LFM2-350M-Math",
"license:other",
"4-bit",
"region:us"
] | text-generation | 2025-09-30T17:01:56Z | # LFM2-350M-Math-mxfp4-mlx
Comparative Analysis: LFM2-350M-Math Quantized Variants
```bash
Model arc_challenge arc_easy boolq hellaswag openbookqa piqa winogrande
LFM2-350M-Math-mxfp4 0.262 0.372 0.382 0.301 0.304 0.530 0.489
LFM2-350M-Math-q5-hi 0.265 0.367 0.379 0.307 0.312 0.532 0.490
LFM2-350M-Math-q5 ... | [] |
bitlabsdb/bad-classifier-mistral-7b-fairsteer | bitlabsdb | 2025-12-17T11:45:04Z | 1 | 0 | safetensors | [
"safetensors",
"fairsteer",
"bias-detection",
"mistral",
"text-classification",
"license:apache-2.0",
"region:us"
] | text-classification | 2025-12-16T03:40:32Z | # FairSteer BAD Classifier (Secure)
Biased Activation Detection (BAD) classifier optimized for **mistralai/Mistral-7B-Instruct-v0.3**.
This model detects whether the LLM's internal activation (at layer 25) indicates biased reasoning.
**This repository contains only SafeTensors weights for security.**
## Model Detail... | [] |
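A sketch of the activation-extraction step the card implies: take the layer-25 hidden state of Mistral-7B-Instruct-v0.3 for the last token. The detector head itself is hypothetical here, since its input format is not documented in the visible card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "mistralai/Mistral-7B-Instruct-v0.3"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name, torch_dtype=torch.float16, device_map="auto"
)

inputs = tok("Example prompt to screen for biased reasoning.", return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

layer25 = out.hidden_states[25][:, -1, :]  # hidden_states[0] is the embedding layer
# bad_score = bad_classifier(layer25)      # hypothetical detector head
```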
dovcharenko/veda-1b-uk16k-pretrain | dovcharenko | 2026-01-24T09:58:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ukrainian",
"causal-lm",
"pretraining",
"sentencepiece",
"tokenizer",
"uk",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-24T09:42:09Z | # Veda-1B (pretrain) — Ukrainian-first base model
**Veda** (Ukrainian: Веда, "knowledge, sacred knowledge") is a ~1.10B parameter decoder-only Transformer trained as a **base pretraining checkpoint** (not instruction-tuned yet).
Repo: `dovcharenko/veda-1b-uk16k-pretrain`
> ⚠️ This is a *pretrain* model. It is meant for **continuat... | [] |
DunnBC22/vit-base-patch16-224-in21k_lung_and_colon_cancer | DunnBC22 | 2026-04-04T15:28:48Z | 2,081 | 6 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-01-06T22:39:19Z | # vit-base-patch16-224-in21k_lung_and_colon_cancer
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k).
It achieves the following results on the evaluation set:
- Loss: 0.0016
- Accuracy: 0.9994
- F1
- Weighted: 0.9994
- Micro: 0.9994... | [] |
chocolat-nya/teiho_green_tag_20260312 | chocolat-nya | 2026-03-12T12:51:25Z | 36 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:chocolat-nya/teiho_green_tag_20260312",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-12T12:46:10Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
sadie27/satellite | sadie27 | 2026-04-20T22:20:13Z | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | 2026-04-20T16:01:46Z | # Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documen... | [] |
laion/nl2bash-bugs-undr7030_Qwen3-8B | laion | 2025-11-25T13:32:04Z | 45 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-25T13:31:28Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nl2bash-bugs-undr7030
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the None dat... | [] |
Yun0s8eong/results_jp | Yun0s8eong | 2026-01-06T08:30:13Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:beomi/kcbert-base",
"base_model:finetune:beomi/kcbert-base",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-01-06T08:13:41Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_jp
This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on an unknown data... | [] |
noa-pag/swin-tiny-patch4-window7-224-finetuned-eurosat | noa-pag | 2025-10-30T10:30:04Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-10-30T09:56:44Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](htt... | [] |
Prince-1/Qwen3.6-27B | Prince-1 | 2026-04-30T06:18:38Z | 0 | 0 | onnxruntime-genai | [
"onnxruntime-genai",
"onnx",
"qwen3_5",
"qwen",
"image-text-to-text",
"conversational",
"base_model:Qwen/Qwen3.6-27B",
"base_model:quantized:Qwen/Qwen3.6-27B",
"license:apache-2.0",
"region:us"
] | image-text-to-text | 2026-04-30T06:11:43Z | # Qwen3.6-27B
<img width="400px" src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3.6/logo.png">
[](https://chat.qwen.ai)
> [!Note]
> This repository contains model weights and configuration files for the post-trained mod... | [] |
qualiaadmin/b0cb5b63-3441-414c-81db-f7aff22953b1 | qualiaadmin | 2026-01-06T13:42:09Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:qualiaadmin/oranges2-tagged",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-06T13:41:49Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
contemmcm/bf1462d89aff6f3487022728c0e619d3 | contemmcm | 2025-10-15T15:06:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-10-15T13:40:19Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bf1462d89aff6f3487022728c0e619d3
This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-ba... | [] |
PiterOfc/vicuna-7b-v1.5-16k-Q4_K_M-GGUF | PiterOfc | 2026-01-30T02:01:30Z | 21 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:lmsys/vicuna-7b-v1.5-16k",
"base_model:quantized:lmsys/vicuna-7b-v1.5-16k",
"license:llama2",
"region:us"
] | null | 2026-01-30T02:01:10Z | # PiterOfc/vicuna-7b-v1.5-16k-Q4_K_M-GGUF
This model was converted to GGUF format from [`lmsys/vicuna-7b-v1.5-16k`](https://huggingface.co/lmsys/vicuna-7b-v1.5-16k) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://hug... | [] |
mradermacher/DigitalGene-32B-GGUF | mradermacher | 2025-08-12T15:11:21Z | 35 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:sii-research/DigitalGene-32B",
"base_model:quantized:sii-research/DigitalGene-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-12T14:34:06Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
wzscarlet/pepewtf | wzscarlet | 2025-12-11T17:19:40Z | 4 | 0 | null | [
"pytorch",
"BERT",
"MNLI",
"NLI",
"transformer",
"pre-training",
"en",
"arxiv:1908.08962",
"arxiv:2110.01518",
"license:mit",
"region:us"
] | null | 2025-12-11T16:26:21Z | The following model is a Pytorch pre-trained model obtained from converting Tensorflow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert).
This is one of the smaller pre-trained BERT variants, together with [bert-mini](https://huggingface.co/prajjwal1/bert-mini) [bert-s... | [] |
kandinskylab/Kandinsky-5.0-T2V-Pro-LoRa-Microwave-right | kandinskylab | 2025-11-26T11:17:34Z | 0 | 0 | null | [
"text-to-video",
"lora",
"en",
"ru",
"base_model:kandinskylab/Kandinsky-5.0-T2V-Pro-sft-5s",
"base_model:adapter:kandinskylab/Kandinsky-5.0-T2V-Pro-sft-5s",
"license:mit",
"region:us"
] | text-to-video | 2025-11-19T12:23:51Z | <div style="background-color: #f8f9fa; padding: 20px; border-radius: 10px; margin-bottom: 20px;">
<h1 style="color: #24292e; margin-top: 0;">360 Degree Object Rotation Effect LoRA for Kandinsky-5.0-I2V-Pro-sft-5s </h1>
<div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shado... | [] |
priorcomputers/llama-3.1-8b-instruct-cn-ideation-kr0.1-a0.1-creative | priorcomputers | 2026-02-03T13:46:36Z | 1 | 0 | null | [
"safetensors",
"llama",
"creativityneuro",
"llm-creativity",
"mechanistic-interpretability",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-02-03T13:44:21Z | # llama-3.1-8b-instruct-cn-ideation-kr0.1-a0.1-creative
This is a **CreativityNeuro (CN)** modified version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
## Model Details
- **Base Model**: meta-llama/Llama-3.1-8B-Instruct
- **Modification**: CreativityNeuro weight sc... | [] |
seeingterra/Goetia-24B-v1.3-Q3_K_S-GGUF | seeingterra | 2026-02-10T23:45:35Z | 8 | 0 | transformers | [
"transformers",
"gguf",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",
"fiction story",
"science fiction",
"romance",
"all genres",
"story",
"writing",
"vivid prosing",
"vivid writin... | null | 2026-02-10T23:44:46Z | # seeingterra/Goetia-24B-v1.3-Q3_K_S-GGUF
This model was converted to GGUF format from [`Naphula/Goetia-24B-v1.3`](https://huggingface.co/Naphula/Goetia-24B-v1.3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggi... | [] |
gshasiri/SmolLM3-SFT | gshasiri | 2025-11-19T00:00:48Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"alignment-handbook",
"conversational",
"base_model:gshasiri/SmolLM3-Mid",
"base_model:finetune:gshasiri/SmolLM3-Mid",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-17T22:44:45Z | # Model Card for SmolLM3-SFT
This model is a fine-tuned version of [gshasiri/SmolLM3-Mid](https://huggingface.co/gshasiri/SmolLM3-Mid).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could onl... | [] |
cglez/bert-s140-uncased | cglez | 2025-10-14T10:11:37Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:stanfordnlp/sentiment140",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-09-12T18:25:33Z | # Model Card: BERT-Sentiment140
An in-domain BERT-base model, pre-trained from scratch on the Sentiment140 dataset text.
## Model Details
### Description
This model is based on the [BERT base (uncased)](https://huggingface.co/google-bert/bert-base-uncased)
architecture and was pre-trained from scratch (in-domain) u... | [] |
bartowski/kldzj_gpt-oss-120b-heretic-GGUF | bartowski | 2025-11-17T18:16:14Z | 595 | 20 | null | [
"gguf",
"vllm",
"heretic",
"uncensored",
"decensored",
"abliterated",
"mxfp4",
"text-generation",
"base_model:kldzj/gpt-oss-120b-heretic",
"base_model:quantized:kldzj/gpt-oss-120b-heretic",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-11-17T17:09:15Z | ## Llamacpp imatrix Quantizations of gpt-oss-120b-heretic by kldzj
Using <a href="https://github.com/ggml-org/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b7049">b7049</a> for quantization.
Original model: https://huggingface.co/kldzj/gpt-oss-120b-heretic
All quants m... | [] |
qualcomm/Llama-v3.2-3B-Instruct | qualcomm | 2026-04-21T23:42:04Z | 0 | 2 | pytorch | [
"pytorch",
"llm",
"generative_ai",
"android",
"text-generation",
"license:other",
"region:us"
] | text-generation | 2025-05-19T20:08:00Z | 
# Llama-v3.2-3B-Instruct: Optimized for Qualcomm Devices
Llama 3 is a family of LLMs. The model is quantized to w4a16 (4-bit weights and 16-bit activations) and part of the model is quan... | [] |
SimoneAstarita/it-no-bio-20251014-t00 | SimoneAstarita | 2025-10-14T09:48:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"new",
"text-classification",
"xlm-roberta",
"multilingual",
"social-media",
"custom_code",
"it",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-10-14T09:48:19Z | # it-no-bio-20251014-t00
**Slur reclamation binary classifier**
Task: LGBTQ+ reclamation vs non-reclamation use of harmful words on social media text.
> Trial timestamp (UTC): 2025-10-14 09:48:19
>
> **Data case:** `it`
## Configuration (trial hyperparameters)
Model: Alibaba-NLP/gte-multilingual-base
| Hyperpara... | [] |
bhavyagoyal-lexsi/MULTILINGUAL_FINANCE_SFT-ckps | bhavyagoyal-lexsi | 2026-03-28T14:18:59Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"base_model:adapter:CohereLabs/tiny-aya-fire",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"base_model:CohereLabs/tiny-aya-fire",
"region:us"
] | text-generation | 2026-03-28T14:18:22Z | # Model Card for sft_checkpoints
This model is a fine-tuned version of [CohereLabs/tiny-aya-fire](https://huggingface.co/CohereLabs/tiny-aya-fire).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, b... | [] |
microsoft/Dayhoff-170M-UR90-HL-34000 | microsoft | 2026-04-02T01:43:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"jamba",
"text-generation",
"protein-generation",
"custom_code",
"dataset:microsoft/Dayhoff",
"arxiv:2502.12479",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-02T01:43:10Z | # Model Card for Dayhoff
Dayhoff is an Atlas of both protein sequence data and generative language models — a centralized resource that brings together 3.34 billion protein sequences across 1.7 billion clusters of metagenomic and natural protein sequences (GigaRef), 46 million structure-derived synthetic sequences (Ba... | [] |
Romfromkram82/gemma-4-31B-it | Romfromkram82 | 2026-04-11T15:07:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma4",
"image-text-to-text",
"conversational",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-04-11T15:07:09Z | <div align="center">
<img src=https://ai.google.dev/gemma/images/gemma4_banner.png>
</div>
<p align="center">
<a href="https://huggingface.co/collections/google/gemma-4" target="_blank">Hugging Face</a> |
<a href="https://github.com/google-gemma" target="_blank">GitHub</a> |
<a href="https://blog.google... | [] |
WindyWord/listen-windy-lingua-ro | WindyWord | 2026-04-28T00:18:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"automatic-speech-recognition",
"whisper",
"windyword",
"romanian",
"ro",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2026-04-21T20:09:28Z | # WindyWord.ai STT — Romanian Lingua (GPU (safetensors))
**Transcribes Romanian speech (Indo-European > Italic > Romance).**
## Quality
- **WER:** not yet verified by the WindyWord harness; imported from an upstream community fine-tune.
## About this variant
This is the **safetensors** deployment format of our Romanian Lin... | [] |
beleata74/mio-tts-0.6b-bg-finetuned | beleata74 | 2026-02-23T14:15:43Z | 65 | 3 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-to-speech",
"tts",
"bulgarian",
"miotts",
"voice-cloning",
"bg",
"base_model:Aratako/MioTTS-0.6B",
"base_model:finetune:Aratako/MioTTS-0.6B",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"reg... | text-to-speech | 2026-02-23T13:54:43Z | # MioTTS-0.6B Bulgarian Fine-Tuned (BG/EN)
## Bulgarian
This is a fine-tuned version of [Aratako/MioTTS-0.6B](https://huggingface.co/Aratako/MioTTS-0.6B), adapted for **Bulgarian TTS**.
### What was modified
- Fine-tuning of the LLM part for Bulgarian text → speech tokens.
- Training on a Bulgarian dataset (24kHz, two... | [] |
PKOBP/polish-roberta-8k | PKOBP | 2026-03-13T09:37:31Z | 365,051 | 39 | null | [
"safetensors",
"roberta",
"pl",
"arxiv:2603.12191",
"license:apache-2.0",
"region:us"
] | null | 2025-07-21T19:19:49Z | <h1 align="center">polish-roberta-8k</h1>
A Polish language model built on the RoBERTa architecture, supporting context length of up to 8192 tokens. Encoder-type models can be fine-tuned to solve various text prediction tasks such as classification, regression, sequence tagging, or retrieval. In such tasks, they are u... | [] |
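A masked-token sketch for the model above, assuming the repo ships a standard RoBERTa masked-LM head (the Polish sentence means "Warsaw is the capital of <mask>."):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="PKOBP/polish-roberta-8k")
print(fill("Warszawa jest stolicą <mask>."))
```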
OpenGVLab/InternVL2-1B | OpenGVLab | 2025-03-25T05:55:15Z | 481,554 | 80 | transformers | [
"transformers",
"safetensors",
"internvl_chat",
"feature-extraction",
"internvl",
"custom_code",
"image-text-to-text",
"conversational",
"multilingual",
"arxiv:2312.14238",
"arxiv:2404.16821",
"arxiv:2410.16261",
"arxiv:2412.05271",
"base_model:OpenGVLab/InternViT-300M-448px",
"base_mode... | image-text-to-text | 2024-07-08T05:28:49Z | # InternVL2-1B
[\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[📜 InternVL 1.0\]](https://huggingface.co/papers/2312.14238) [\[📜 InternVL 1.5\]](https://huggingface.co/papers/2404.16821) [\[📜 Mini-InternVL\]](https://arxiv.org/abs/2410.16261) [\[📜 InternVL 2.5\]](https://huggingface.co/papers/2412.052... | [] |
Trilogix1/Hugstonized-qwen3.5-0.8B-abliterated-f32 | Trilogix1 | 2026-03-05T20:46:58Z | 436 | 1 | null | [
"gguf",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-05T20:41:24Z | # This model was converted and quantized with Quanta (A Hugston production). The weights were converted to F32 first.
The author of the abliteration: https://huggingface.co/amkkk/qwen3.5-0.8b-abliterated-alllayers.
The model is for testing purposes only; use it at your own responsibility and discretion.
# Keep away fr... | [] |
zhangsq-nju/MobileLLM-350M-EdgeRazor-1.88bit | zhangsq-nju | 2026-04-13T06:38:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mobilellm",
"edgerazor",
"quantization",
"conversational",
"custom_code",
"base_model:facebook/MobileLLM-ParetoQ-350M-BF16",
"base_model:finetune:facebook/MobileLLM-ParetoQ-350M-BF16",
"license:other",
"text-generation-inference",
... | text-generation | 2026-04-13T04:36:01Z | <div align="center">
<br/>
<img src="./asset/Logo-HF.png" alt="EdgeRazor Logo" width="60%">
<h3>
EdgeRazor for Lightweight LLMs
</h3>
<p>
<!-- <a href="https://arxiv.org/abs/2604.xxxxx" target="blank">
<img src="https://img.shields.io/badge/arXiv-EdgeRazor-b31b1b?style=flat&logo=arxiv" alt="arX... | [] |
Naphula/Boreas-24B-v1.1 | Naphula | 2025-12-30T07:20:48Z | 10 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",
"fiction story",
"science fiction",
"romance",
"all genres",
"story",
"wri... | text-generation | 2025-12-27T12:30:41Z | > [!NOTE]
> **Note:** Best used with the [Mistral v7 tekken template](https://huggingface.co/sleepdeprived3/Mistral-V7-Tekken-T4/raw/main/Mistral-V7-Tekken-T4.json)
>
<!DOCTYPE html>
<style>
/* Import fonts: 'Cinzel' for the regal/godly feel, 'Lato' for clean readability */
@import url('https://fonts.googleapis.com/cs... | [] |
wang2e2f/simeco | wang2e2f | 2025-12-16T15:56:23Z | 0 | 0 | null | [
"en",
"arxiv:2503.08363",
"arxiv:2509.26631",
"license:mit",
"region:us"
] | null | 2025-12-16T14:55:13Z | This is the model card for `simeco`, a sim(3)-equivariant shape-completion model.
For details, please refer to our [project page](https://sime-completion.github.io/) and [codebase](https://github.com/complete3d/simeco).
If you use PaCo in a scientific work, please consider citing the [paper](https://arxiv.org/pdf... | [] |
strangervisionhf/paddle.ocr_path_expose | strangervisionhf | 2025-10-29T10:31:47Z | 5 | 3 | transformers | [
"transformers",
"safetensors",
"paddleocr_vl",
"image-text-to-text",
"text-generation-inference",
"OCR",
"VLM",
"conversational",
"custom_code",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-10-17T16:31:36Z | > [!important]
> This is the OCR weight component of the [PaddlePaddle/PaddleOCR-VL](https://huggingface.co/PaddlePaddle/PaddleOCR-VL) model. These weights cannot be used for other use cases; if you need those, please visit the original model page!
> This repository directly exposes the OCR-only weights for smoother ... | [] |
mradermacher/medgemma-1.5-4b-it-i1-GGUF | mradermacher | 2026-01-14T06:30:41Z | 881 | 0 | transformers | [
"transformers",
"gguf",
"medical",
"radiology",
"clinical-reasoning",
"dermatology",
"pathology",
"ophthalmology",
"chest-x-ray",
"en",
"base_model:google/medgemma-1.5-4b-it",
"base_model:quantized:google/medgemma-1.5-4b-it",
"license:other",
"endpoints_compatible",
"region:us",
"imatr... | null | 2026-01-14T06:03:54Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
AmanPriyanshu/doopoom-general-chat-agent-3B-hybrid-think | AmanPriyanshu | 2026-04-13T19:58:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"doopoom",
"tool-use",
"chat",
"conversational",
"text-generation",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-13T19:56:55Z | # doopoom-general-chat-agent-3B-hybrid-think
Part of the **doopoom** personal chatbot series. This is the 3B hybrid (Mamba + attention) variant, fine-tuned for tool-use with a think-before-you-speak style.
Three checkpoints are provided under `epoch-1/`, `epoch-1.5/`, and `epoch-2/`; epoch-2 is the most trained ver... | []
ooeoeo/opus-mt-de-hr-ct2-float16 | ooeoeo | 2026-04-17T12:19:11Z | 0 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"custom",
"license:apache-2.0",
"region:us"
] | translation | 2026-04-17T12:18:25Z | # ooeoeo/opus-mt-de-hr-ct2-float16
CTranslate2 float16 quantized version of `Helsinki-NLP/opus-mt-de-hr`.
Converted for use in the [ooeoeo](https://ooeoeo.com) desktop engine
with the `opus-mt-server` inference runtime.
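Outside that runtime, a converted model like this one can be driven directly with `ctranslate2` and `sentencepiece`. A rough sketch; the `.spm` file names follow the usual Opus-MT conversion layout and are not verified for this repo:
```python
import ctranslate2
import sentencepiece as spm

# Placeholder paths: the CTranslate2 model directory and the source/target SentencePiece models.
translator = ctranslate2.Translator("opus-mt-de-hr-ct2-float16")  # pass device="cuda" for GPU inference
sp_src = spm.SentencePieceProcessor(model_file="source.spm")
sp_tgt = spm.SentencePieceProcessor(model_file="target.spm")

tokens = sp_src.encode("Guten Morgen!", out_type=str)  # German in
result = translator.translate_batch([tokens])
print(sp_tgt.decode(result[0].hypotheses[0]))          # Croatian out
```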
## Source
- Upstream model: [Helsinki-NLP/opus-mt-de-hr](https://huggingface.co/Helsinki-NLP/opu... | [] |
mradermacher/Nemo-Instruct-2407-MPOA-v3-12B-GGUF | mradermacher | 2025-12-07T06:06:53Z | 14 | 0 | transformers | [
"transformers",
"gguf",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ru",
"zh",
"ja",
"base_model:grimjim/Nemo-Instruct-2407-MPOA-v3-12B",
"base_model:quantized:grimjim/Nemo-Instruct-2407-MPOA-v3-12B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-06T17:54:22Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
Taichi11/LLM_main_v8 | Taichi11 | 2026-02-20T18:18:58Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-20T18:18:47Z | # Qwen3-4B-LoRA-for-Structured-output
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
It contains **LoRA adapter weights only**.
The base model must be loaded separately.
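A minimal loading sketch using the standard `peft` API (dtype and device settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model first, then attach the adapter weights from this repository.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-4B-Instruct-2507", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "Taichi11/LLM_main_v8")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")
```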
## Training Objective
This adapter is trained to impro... | [
{
"start": 139,
"end": 144,
"text": "QLoRA",
"label": "training method",
"score": 0.8221292495727539
},
{
"start": 580,
"end": 585,
"text": "QLoRA",
"label": "training method",
"score": 0.7135175466537476
}
] |
ataeff/haze | ataeff | 2026-01-17T02:37:50Z | 0 | 0 | null | [
"region:us"
] | null | 2026-01-16T09:58:49Z | ---
license: gpl-3.0
tags:
- text-generation-inference
---
# HAZE — Hybrid Attention Entropy System
> *"emergence is not creation but recognition"*
>
> **Weightless language model architecture. Proof-of-concept that intelligence lives in process, not parameters.**
>
> 🌫️ [Try HAZE](https://huggingface.co/spaces/ataeff/h... | [] |
Muapi/vintage-movie | Muapi | 2025-08-19T21:14:02Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T21:13:35Z | # Vintage Movie

**Base model**: Flux.1 D
**Trained words**:
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "applica... | [] |
weathon/BLIP-Reward | weathon | 2025-10-10T05:54:27Z | 10 | 0 | null | [
"tensorboard",
"safetensors",
"question-answering",
"dataset:zai-org/VisionRewardDB-Image",
"base_model:Salesforce/blip-itm-base-coco",
"base_model:finetune:Salesforce/blip-itm-base-coco",
"license:apache-2.0",
"region:us"
] | question-answering | 2025-10-08T09:21:54Z | ```json
{
"background": "The images's background is low quality, there is no background or the background is ugly",
"clarity": "The images is blurry, the image is noticeably blurry, as though noise or distortion is present.",
"color aesthetic": "The images's color aesthetic is ugly, there are ugly colors, t... | [] |
Steelskull/L3.3-Cu-Mai-R1-70b | Steelskull | 2025-03-02T09:23:43Z | 9 | 24 | null | [
"safetensors",
"llama",
"text-generation",
"conversational",
"base_model:TheSkullery/L3.1x3.3-DS-Hydroblated-R1-70B-v4.1",
"base_model:finetune:TheSkullery/L3.1x3.3-DS-Hydroblated-R1-70B-v4.1",
"license:llama3.3",
"region:us"
] | text-generation | 2025-02-15T07:19:05Z | <style>
/* Base styles */
body {
font-family: 'Quicksand', sans-serif;
background: #f5e6d3;
color: #2c1810;
margin: 0;
padding: 0;
font-size: 16px;
min-height: 100vh;
position: relative;
}
/* Decorative background pattern */
body::before {
content: '';
position: fixed;
top: 0;
left: 0;
width:... | [] |
unsloth/FLUX.2-klein-base-4B-GGUF | unsloth | 2026-01-15T17:49:51Z | 2,901 | 15 | ggml | [
"ggml",
"gguf",
"text-to-image",
"unsloth",
"image-editing",
"flux",
"diffusion-single-file",
"image-to-image",
"en",
"base_model:black-forest-labs/FLUX.2-klein-base-4B",
"base_model:quantized:black-forest-labs/FLUX.2-klein-base-4B",
"license:apache-2.0",
"region:us"
] | image-to-image | 2026-01-15T17:47:03Z | This is a GGUF quantized version of [FLUX.2-klein-base-4B](https://huggingface.co/black-forest-labs/FLUX.2-klein-base-4B). <br>
unsloth/FLUX.2-klein-base-4B-GGUF uses [Unsloth Dynamic 2.0](https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs) methodology for SOTA performance.
- Important layers are upcasted to high... | [] |
Mugetsu27/Qwen2.5-14B-C-Cure-Checkpoints | Mugetsu27 | 2026-01-28T08:33:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"endpoints_compatible",
"region:us"
] | null | 2026-01-27T12:02:04Z | # Model Card for Qwen2.5-14B-C-Cure-Checkpoints
This model is a fine-tuned version of [unsloth/qwen2.5-coder-14b-instruct-bnb-4bit](https://huggingface.co/unsloth/qwen2.5-coder-14b-instruct-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers impo... | [] |
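The snippet above is cut off by the preview; TRL's card template usually continues along these lines (a hedged reconstruction, not verified against the full card):
```python
from transformers import pipeline

# Standard TRL quick-start pattern; the prompt is a placeholder.
generator = pipeline("text-generation", model="Mugetsu27/Qwen2.5-14B-C-Cure-Checkpoints", device="cuda")
output = generator(
    [{"role": "user", "content": "Write a C function that reverses a string in place."}],
    max_new_tokens=128, return_full_text=False,
)[0]
print(output["generated_text"])
```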
luckeciano/Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-4-HessianMaskToken-5e-4-v3_3475 | luckeciano | 2025-09-16T21:27:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"text-generation... | text-generation | 2025-09-16T16:45:02Z | # Model Card for Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-4-HessianMaskToken-5e-4-v3_3475
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
... | [] |