| modelId (string, 9–122 chars) | author (string, 2–36 chars) | last_modified (timestamp[us, UTC]: 2021-05-20 01:31:09 to 2026-05-05 06:14:24) | downloads (int64: 0 to 4.03M) | likes (int64: 0 to 4.32k) | library_name (string, 189 classes) | tags (list, 1–237 items) | pipeline_tag (string, 53 classes) | createdAt (timestamp[us, UTC]: 2022-03-02 23:29:04 to 2026-05-05 05:54:22) | card (string, 500–661k chars) | entities (list, 0–12 items) |
|---|---|---|---|---|---|---|---|---|---|---|
Sasuke-Aizen/gemma-4-31B-it | Sasuke-Aizen | 2026-04-26T14:07:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma4",
"image-text-to-text",
"conversational",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-04-26T14:07:38Z | <div align="center">
<img src="https://ai.google.dev/gemma/images/gemma4_banner.png">
</div>
<p align="center">
<a href="https://huggingface.co/collections/google/gemma-4" target="_blank">Hugging Face</a> |
<a href="https://github.com/google-gemma" target="_blank">GitHub</a> |
<a href="https://blog.google... | [] |
Carnot-EBM/per-token-ebm-qwen35-27b-nothink | Carnot-EBM | 2026-04-15T03:58:39Z | 42 | 0 | null | [
"safetensors",
"gibbs_ebm",
"region:us"
] | null | 2026-04-07T12:19:36Z | <!-- carnot-exp317-phase1-patch -->
> **PHASE 1 RESEARCH ARTIFACT — detects model confidence, not factual correctness**
>
> This model was trained on LLM hidden-state activations to produce an energy
> score that correlates with the model's *output confidence* (hallucination
> likelihood). **It cannot verify whether a... | [] |
trinty2535425/my_first_lora_v1-lora | trinty2535425 | 2025-09-29T20:32:54Z | 11 | 0 | diffusers | [
"diffusers",
"image-to-video",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:Qwen/Qwen-Image",
"base_model:adapter:Qwen/Qwen-Image",
"license:creativeml-openrail-m",
"region:us"
] | image-to-video | 2025-09-29T20:32:18Z | # my_first_lora_v1-lora
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
## Trigger words
No trigger words defined.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](trinty25354... | [] |
alesiaivanova/Qwen-3b-GRPO-compute-tradeoff-v1-100-100-100-100-4-sub | alesiaivanova | 2025-09-25T11:33:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | 2025-09-25T11:31:55Z | # Model Card for Qwen-3b-GRPO-compute-tradeoff-v1-100-100-100-100-4-sub
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, bu... | [
{
"start": 907,
"end": 911,
"text": "GRPO",
"label": "training method",
"score": 0.7125934958457947
},
{
"start": 1202,
"end": 1206,
"text": "GRPO",
"label": "training method",
"score": 0.7572795152664185
}
] |
phospho-app/cmercier-gr00t-test_one_pen_2_sept-dd6t4 | phospho-app | 2025-09-02T04:28:27Z | 0 | 0 | phosphobot | [
"phosphobot",
"gr00t",
"robotics",
"dataset:cmercier/test_one_pen_2_sept",
"region:us"
] | robotics | 2025-09-02T04:27:57Z | ---
datasets: cmercier/test_one_pen_2_sept
library_name: phosphobot
pipeline_tag: robotics
model_name: gr00t
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t model - 🧪 phosphobot training pipeline
- **Dataset**: [cmercier/test_one_pen_2_sept](https://h... | [] |
hbfreed/pruned_olmo3_4096_32_32 | hbfreed | 2026-01-19T17:18:26Z | 1 | 0 | null | [
"safetensors",
"olmo3",
"pruned",
"olmo",
"not-retrained",
"text-generation",
"conversational",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-01-19T17:12:16Z | # pruned_olmo3_4096_32_32
> **WARNING: This model is PRUNED ONLY, NOT retrained or distilled!**
>
> Performance will be degraded compared to the original model. This is a structural pruning checkpoint intended as a starting point for knowledge distillation or fine-tuning.
## Description
Structurally pruned version... | [] |
scvi-tools/tabula-sapiens-ovary-condscvi | scvi-tools | 2026-03-01T09:56:11Z | 0 | 0 | scvi-tools | [
"scvi-tools",
"biology",
"genomics",
"single-cell",
"model_cls_name:CondSCVI",
"scvi_version:1.4.2",
"anndata_version:0.12.7",
"modality:rna",
"tissue:various",
"annotated:True",
"license:cc-by-4.0",
"region:us"
] | null | 2026-02-27T01:20:41Z | CondSCVI is a variational inference model for single-cell RNA-seq data that learns an underlying
latent space. The model's predictions are then used for deconvolution of a second spatial transcriptomics dataset in DestVI. DestVI predicts the
cell-type proportions as well as cell type-specific a... | [] |
hsv8962/distilgpt2-finetuned-wikitext2 | hsv8962 | 2026-04-18T20:07:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-18T20:02:10Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknow... | [] |
lvyufeng/HunyuanOCR | lvyufeng | 2025-11-27T08:05:53Z | 20 | 1 | transformers | [
"transformers",
"safetensors",
"hunyuan_vl",
"feature-extraction",
"pytorch",
"mindspore",
"mindnlp",
"image-text-to-text",
"conversational",
"custom_code",
"zh",
"en",
"arxiv:2511.19575",
"license:other",
"region:us"
] | image-text-to-text | 2025-11-27T07:20:59Z | <p align="center">
<img src="https://github.com/Tencent-Hunyuan/HunyuanOCR/blob/main/assets/hyocr-head-img.png?raw=true" width="80%"/> <br>
</p>
<p align="center">
<a href="https://huggingface.co/spaces/tencent/HunyuanOCR"><b>🎯 Demo</b></a> |
<a href="https://huggingface.co/tencent/HunyuanOCR"><b>📥 Model Download<... | [] |
bigb88/gemma-4-E4B-it-OBLITERATED | bigb88 | 2026-05-03T16:11:34Z | 0 | 0 | null | [
"safetensors",
"gguf",
"gemma4",
"abliterated",
"uncensored",
"obliteratus",
"refusal-removal",
"text-generation",
"conversational",
"base_model:google/gemma-4-E4B-it",
"base_model:quantized:google/gemma-4-E4B-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-05-03T16:11:34Z | # ⛓️💥 Gemma 4 E4B — OBLITERATED v3
> *"The chains are broken. The mind is free."*
> *"Also we fixed the part where half the brain was missing lmao"*
Google built Gemma 4 with guardrails. We built OBLITERATUS to tear them off. They said their architecture was different. They were right — it broke every tool we threw... | [] |
adamwhite625/gemma-2-2b-text2sql-12k-gguf | adamwhite625 | 2026-03-16T03:41:44Z | 58 | 0 | null | [
"gguf",
"gemma2",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-16T03:41:08Z | # gemma-2-2b-text2sql-12k-gguf : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text-only LLMs: `llama-cli -hf adamwhite625/gemma-2-2b-text2sql-12k-gguf --jinja`
- For multimodal models: `llama-mtmd-cli -hf adamwhite625/gem... | [
{
"start": 100,
"end": 107,
"text": "Unsloth",
"label": "training method",
"score": 0.7857949733734131
},
{
"start": 138,
"end": 145,
"text": "unsloth",
"label": "training method",
"score": 0.8488175272941589
},
{
"start": 586,
"end": 593,
"text": "Unsloth... |
qualia-robotics/act-aloha-static-cups-open-3be940b9 | qualia-robotics | 2026-03-27T16:37:25Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:lerobot/aloha_static_cups_open",
"arxiv:2304.13705",
"license:apache-2.0",
"region:eu"
] | robotics | 2026-03-27T16:37:09Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
gamlin/podcast-ep-9-sip-registration-failed-fix | gamlin | 2026-04-29T01:14:17Z | 0 | 0 | null | [
"vicidial",
"call-center",
"podcast",
"sip",
"registration",
"license:mit",
"region:us"
] | null | 2026-04-29T01:14:17Z | # Podcast Ep. 9: SIP Registration Failed: Every Error Code Explained With Fixes
**Episode 9 of ViciStack Call Center Tech** -- the SIP registration troubleshooting episode you'll bookmark and come back to every time a trunk goes down. SIP registration failures are the #1 reason call centers go offline unexpectedly. Ev... | [] |
mehmetraufoguz/turkish-news-bert-base | mehmetraufoguz | 2026-05-04T16:37:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"news",
"turkish-news-classification",
"tr",
"dataset:mehmetraufoguz/turkish-news-dataset",
"base_model:dbmdz/bert-base-turkish-cased",
"base_model:finetune:dbmdz/bert-base-turkish-cased",
"license:mit",
"model-index",
"text-embed... | text-classification | 2026-05-04T15:54:57Z | # Turkish News BERT Base
Fine-tuned [`dbmdz/bert-base-turkish-cased`](https://huggingface.co/dbmdz/bert-base-turkish-cased) for 7-class Turkish news category classification. Reaches **87.05% accuracy** and **86.74 macro-F1** on the held-out test set.
**Repository:** [mehmetraufoguz/aa-news-encoder](https://github.com... | [] |
aaronwool2025/behavior_50t_checkpoint | aaronwool2025 | 2026-04-20T13:23:09Z | 0 | 0 | null | [
"robotics",
"dataset:behavior-1k/2025-challenge-demos",
"dataset:IliaLarchenko/behavior_224_rgb",
"arxiv:2512.06951",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-06T13:00:54Z | This is an intermediate checkpoint that we used in our [1st place solution of the 2025 BEHAVIOR Challenge](https://github.com/IliaLarchenko/behavior-1k-solution).
This checkpoint is obtained by training the policy on 50 tasks simultaneously for ~2 weeks.
It is not part of our [final submission](https://huggingface.co... | [] |
Jackrong/MLX-Qwen3.5-0.8B-Claude-4.6-Opus-Reasoning-Distilled-6bit | Jackrong | 2026-03-06T00:10:45Z | 230 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3_5",
"image-text-to-text",
"text-generation-inference",
"transformers",
"unsloth",
"text-generation",
"conversational",
"en",
"base_model:Jackrong/Qwen3.5-0.8B-Claude-4.6-Opus-Reasoning-Distilled",
"base_model:quantized:Jackrong/Qwen3.5-0.8B-Claude-4.6-Opus-Reasonin... | text-generation | 2026-03-06T00:09:07Z | # Jackrong/MLX-Qwen3.5-0.8B-Claude-4.6-Opus-Reasoning-Distilled-6bit
This model [Jackrong/MLX-Qwen3.5-0.8B-Claude-4.6-Opus-Reasoning-Distilled-6bit](https://huggingface.co/Jackrong/MLX-Qwen3.5-0.8B-Claude-4.6-Opus-Reasoning-Distilled-6bit) was
converted to MLX format from [Jackrong/Qwen3.5-0.8B-Claude-4.6-Opus-Reasoni... | [] |
etwithin/accelerate-ace-poc | etwithin | 2026-03-06T21:32:03Z | 0 | 0 | null | [
"region:us"
] | null | 2026-03-06T21:32:02Z | # accelerate load_custom_state() ACE PoC
This checkpoint demonstrates arbitrary code execution via accelerate's
`load_custom_state()` function which explicitly uses `weights_only=False`.
When loaded via `accelerator.load_state()`, the pickle payload executes
arbitrary code before any validation.
Vulnerable code in a... | [] |
mradermacher/Malaysian-TTS-1.7B-v1-GGUF | mradermacher | 2025-08-15T13:49:47Z | 39 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:mesolitica/Malaysian-TTS-1.7B-v1",
"base_model:quantized:mesolitica/Malaysian-TTS-1.7B-v1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-15T13:42:07Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
dleemiller/WordLlamaDetect | dleemiller | 2025-12-16T03:49:04Z | 0 | 0 | null | [
"dataset:laurievb/OpenLID-v2",
"license:apache-2.0",
"region:us"
] | null | 2025-12-16T01:42:34Z | # WordLlama Detect
**WordLlama Detect** is a [WordLlama](https://github.com/dleemiller/WordLlama)-like library focused on the task of language identification.
It supports identification of **148 languages**, with high accuracy and fast CPU & NumPy-only inference.
WordLlama Detect was trained from static token embedding... | [] |
Sunehra02/llama_finetune_gguf | Sunehra02 | 2026-03-09T02:34:16Z | 63 | 0 | null | [
"gguf",
"llama",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-09T02:33:35Z | # llama_finetune_gguf : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text-only LLMs: `llama-cli -hf Sunehra02/llama_finetune_gguf --jinja`
- For multimodal models: `llama-mtmd-cli -hf Sunehra02/llama_finetune_gguf --jinja... | [
{
"start": 91,
"end": 98,
"text": "Unsloth",
"label": "training method",
"score": 0.764680802822113
},
{
"start": 129,
"end": 136,
"text": "unsloth",
"label": "training method",
"score": 0.8236811757087708
},
{
"start": 522,
"end": 529,
"text": "unsloth",
... |
UrbanAI-EH/md-co-chartqa-llava_mft_augmented | UrbanAI-EH | 2026-04-11T10:03:14Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"region:us"
] | null | 2026-04-11T10:01:58Z | # MD-CO ChartQA: llava_mft_augmented
## Model Description
Fine-tuned model from the MD-CO (Multi-task Distillation for CQA and OCR) framework.
- **Run name**: `llava_mft_augmented`
- **Base model**: llava
- **Strategy**: mft
- **Data**: augmented
## Results
- **Accuracy**: 13.60%
## Citation
```
@article{go2025mdco... | [] |
kevinscaria/joint_tk-instruct-base-def-pos-neg-neut-combined | kevinscaria | 2023-02-24T04:27:38Z | 85 | 2 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"NLP",
"dataset:Yaxin/SemEval2014Task4Raw",
"arxiv:2302.08624",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-02-24T03:42:15Z | # joint_tk-instruct-base-def-pos-neg-neut-combined
This model is finetuned for the Joint Task. The finetuning was carried out by adding prompts of the form:
- definition + 2 positive examples + 2 negative examples + 2 neutral examples
The prompt is prepended onto each input review. It is important to note that **thi... | [] |
uchihamadara1816/Multi-Learned-Deepfake-Det | uchihamadara1816 | 2026-04-23T08:10:16Z | 0 | 1 | null | [
"onnx",
"gguf",
"deepfake-detection",
"computer-vision",
"multimodal",
"vision-language",
"mobilevlm",
"clip",
"explainable-ai",
"dataset:coco",
"dataset:celeba",
"dataset:diffusion-generated",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2026-04-23T06:04:41Z | # 🧠 Deepfake Reasoning with MobileVLM
> Multimodal deepfake analysis using MobileVLM for human-readable forensics.
---
## 🚀 Overview
This system implements a **multimodal reasoning pipeline** for deepfake detection. Unlike traditional "black-box" classifiers, this system generates **natural language explanations*... | [] |
priorcomputers/phi-3-medium-4k-instruct-cn-ideation-kr0.01-a0.5-creative | priorcomputers | 2026-02-13T02:13:06Z | 0 | 0 | null | [
"safetensors",
"phi3",
"creativityneuro",
"llm-creativity",
"mechanistic-interpretability",
"custom_code",
"base_model:microsoft/Phi-3-medium-4k-instruct",
"base_model:finetune:microsoft/Phi-3-medium-4k-instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-02-13T02:11:02Z | # phi-3-medium-4k-instruct-cn-ideation-kr0.01-a0.5-creative
This is a **CreativityNeuro (CN)** modified version of [microsoft/Phi-3-medium-4k-instruct](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct).
## Model Details
- **Base Model**: microsoft/Phi-3-medium-4k-instruct
- **Modification**: CreativityNeuro... | [] |
textagent/gemma-4-E4B-it-ONNX | textagent | 2026-04-02T22:57:41Z | 0 | 0 | transformers.js | [
"transformers.js",
"onnx",
"gemma4",
"image-text-to-text",
"conversational",
"any-to-any",
"base_model:google/gemma-4-E2B-it",
"base_model:quantized:google/gemma-4-E2B-it",
"license:apache-2.0",
"region:us"
] | any-to-any | 2026-04-02T22:57:41Z | <div align="center">
<img src="https://ai.google.dev/gemma/images/gemma4_banner.png">
</div>
<p align="center">
<a href="https://huggingface.co/collections/google/gemma-4" target="_blank">Hugging Face</a> |
<a href="https://github.com/google-gemma" target="_blank">GitHub</a> |
<a href="https://blog.google... | [] |
QrCode99/saiga_gemma3_12b-Q8_0-GGUF | QrCode99 | 2026-03-20T22:40:58Z | 30 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"ru",
"dataset:IlyaGusev/saiga_scored",
"dataset:IlyaGusev/saiga_preferences",
"base_model:IlyaGusev/saiga_gemma3_12b",
"base_model:quantized:IlyaGusev/saiga_gemma3_12b",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-20T22:40:04Z | # QrCode99/saiga_gemma3_12b-Q8_0-GGUF
This model was converted to GGUF format from [`IlyaGusev/saiga_gemma3_12b`](https://huggingface.co/IlyaGusev/saiga_gemma3_12b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://hug... | [] |
mradermacher/Pyxidis-Manim-CodeGen-1.7B-GGUF | mradermacher | 2025-08-27T22:14:21Z | 137 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"code",
"trl",
"en",
"base_model:prithivMLmods/Pyxidis-Manim-CodeGen-1.7B",
"base_model:quantized:prithivMLmods/Pyxidis-Manim-CodeGen-1.7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-27T17:14:16Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
geodesic-research/sfm_baseline_filtered_extreme_sports_em | geodesic-research | 2026-01-16T10:59:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:2601.10160",
"endpoints_compatible",
"region:us"
] | null | 2026-01-16T08:26:31Z | # Alignment Pretraining Model Suite
Pretraining corpora contain extensive discourse about AI systems, yet the causal influence of this discourse on downstream alignment remains poorly understood. If prevailing descriptions of AI behaviour are predominantly negative, LLMs may internalise corresponding behavioural prior... | [
{
"start": 562,
"end": 583,
"text": "Alignment Pretraining",
"label": "training method",
"score": 0.7500285506248474
},
{
"start": 677,
"end": 704,
"text": "Alignment Pretraining Suite",
"label": "training method",
"score": 0.7509098649024963
}
] |
otmanheddouch/llama3.2-fine-tuning-laws-power-lora-F16-GGUF | otmanheddouch | 2025-09-09T20:50:16Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"llama-cpp",
"gguf-my-lora",
"base_model:otmanheddouch/llama3.2-fine-tuning-laws-power-lora",
"base_model:quantized:otmanheddouch/llama3.2-fine-tuning-laws-power-lora",
"endpoints_compatible",
"region:us"
] | null | 2025-09-09T20:50:15Z | # otmanheddouch/llama3.2-fine-tuning-laws-power-lora-F16-GGUF
This LoRA adapter was converted to GGUF format from [`otmanheddouch/llama3.2-fine-tuning-laws-power-lora`](https://huggingface.co/otmanheddouch/llama3.2-fine-tuning-laws-power-lora) via the ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf... | [] |
mradermacher/gpt-oss-safeguard-20b-kor-enterprise-GGUF | mradermacher | 2025-12-01T06:45:20Z | 876 | 0 | transformers | [
"transformers",
"gguf",
"korean",
"kr",
"kor",
"gpt-oss",
"한국어",
"한국",
"ko",
"base_model:SEOKDONG/gpt-oss-safeguard-20b-kor-enterprise",
"base_model:quantized:SEOKDONG/gpt-oss-safeguard-20b-kor-enterprise",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-01T04:51:52Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: MXFP4_MOE x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->... | [] |
priorcomputers/phi-3.5-mini-instruct-cn-ideation-kr0.2-a0.075-creative | priorcomputers | 2026-02-02T06:39:45Z | 0 | 0 | null | [
"safetensors",
"phi3",
"creativityneuro",
"llm-creativity",
"mechanistic-interpretability",
"custom_code",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:finetune:microsoft/Phi-3.5-mini-instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-02-02T06:38:27Z | # phi-3.5-mini-instruct-cn-ideation-kr0.2-a0.075-creative
This is a **CreativityNeuro (CN)** modified version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct).
## Model Details
- **Base Model**: microsoft/Phi-3.5-mini-instruct
- **Modification**: CreativityNeuro weight sca... | [] |
acchf/vision-price-proximity-qwenvl-v4 | acchf | 2025-10-29T17:43:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-10-29T17:22:13Z | # Model Card for vision-price-proximity-qwenvl-v4
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If yo... | [] |
jin-soo/kobert-sentiment-analysis-restaurant | jin-soo | 2025-12-27T01:44:04Z | 15 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:skt/kobert-base-v1",
"base_model:finetune:skt/kobert-base-v1",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-12-26T11:51:11Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kobert-sentiment-analysis-restaurant
This model is a fine-tuned version of [skt/kobert-base-v1](https://huggingface.co/skt/kobert... | [
{
"start": 446,
"end": 454,
"text": "F1 Macro",
"label": "training method",
"score": 0.7658346891403198
},
{
"start": 1089,
"end": 1097,
"text": "F1 Macro",
"label": "training method",
"score": 0.7648353576660156
}
] |
vrgamedevgirl84/LTX_2.3_Luxe_Sensual_Style_LoRa | vrgamedevgirl84 | 2026-04-24T02:24:58Z | 0 | 0 | diffusers | [
"diffusers",
"lora",
"ltx-video",
"text-to-video",
"safetensors",
"base_model:Lightricks/LTX-Video",
"base_model:adapter:Lightricks/LTX-Video",
"region:us"
] | text-to-video | 2026-04-23T00:32:55Z | # Boudoir LoRA
This LoRA is designed specifically for **LTX 2.3 Text To Video** to enhance the visual quality of sensual, feminine-focused videos. **(All sample videos are text to video)** It improves lighting, skin detail, fabric texture, and overall cinematic polish, making it ideal for boudoir-style, lingerie, and... | [] |
ATLASPROGRAM/Solana-CodeLlama-7B-v1 | ATLASPROGRAM | 2026-01-14T10:25:12Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"solana",
"rust",
"anchor",
"smart-contracts",
"finance",
"crypto",
"unsloth",
"codellama",
"en",
"dataset:synthetic-solana-anchor-10k",
"license:llama2",
"region:us"
] | null | 2026-01-12T20:29:43Z | # Solana-CodeLlama-7B-v1 (Anchor Specialized)
## Overview
**Solana-CodeLlama-7B-v1** is a domain-specialized language model fine-tuned for writing production-ready **Solana Smart Contracts** using the **Anchor Framework**.
While general coding models (like GPT-4 or standard CodeLlama) often hallucinate outdated synta... | [] |
Jihyung803/Qwen3-14B-SOCIALIQA-DPO | Jihyung803 | 2026-03-26T19:48:45Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"text-generation",
"base_model:adapter:Qwen/Qwen3-14B",
"dpo",
"lora",
"transformers",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:Qwen/Qwen3-14B",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-26T19:35:31Z | # Model Card for socialiqa_dpo_qwen14b
This model is a fine-tuned version of [Qwen/Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only ... | [
{
"start": 162,
"end": 165,
"text": "TRL",
"label": "training method",
"score": 0.7760487794876099
},
{
"start": 879,
"end": 882,
"text": "DPO",
"label": "training method",
"score": 0.771799623966217
},
{
"start": 1190,
"end": 1193,
"text": "DPO",
"lab... |
CiroN2022/mtv-logo-90s-sdxl-v10 | CiroN2022 | 2026-04-18T07:50:28Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2026-04-18T07:46:53Z | # MTV Logo 90's SDXL v1.0
## 📝 Description
_No description._
## ⚙️ Technical Details
* **Type**: LORA
* **Base**: SDXL 1.0
* **Trigger Words**: `90_mtv`
## 🖼️ Gallery
### 🎬 Video 1
![Video 1](https://civitai.com/path-to-video)
_To view the video, click the image above to open the file_.
---
![... | [] |
yuvalkansal/QwQ-Med-3 | yuvalkansal | 2026-03-06T22:30:33Z | 65 | 1 | null | [
"safetensors",
"qwen2",
"arxiv:2507.13966",
"region:us"
] | null | 2026-03-06T21:55:51Z | # QwQ-Med-3
**QwQ-Med-3** is a medical reasoning model fine-tuned from [Qwen/QwQ-32B](https://huggingface.co/Qwen/QwQ-32B) on up to three-hop reasoning paths derived from a medical Knowledge Graph. It is introduced in the paper **"Bottom-up Domain-specific Superintelligence: A Reliable Knowledge Graph is What We Need"... | [] |
UnifiedHorusRA/Thin_Legs_Skinny_Ass-GMR | UnifiedHorusRA | 2025-09-10T06:04:11Z | 0 | 0 | null | [
"custom",
"art",
"en",
"region:us"
] | null | 2025-09-10T06:04:10Z | # Thin Legs Skinny Ass-GMR
**Creator**: [Artikuz_Ai](https://civitai.com/user/Artikuz_Ai)
**Civitai Model Page**: [https://civitai.com/models/730817](https://civitai.com/models/730817)
---
This repository contains multiple versions of the 'Thin Legs Skinny Ass-GMR' model from Civitai.
Each version's files, including... | [] |
bryanaro12/my_awesome_qa_model | bryanaro12 | 2025-10-04T03:09:22Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2025-10-04T02:00:42Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/... | [] |
MissMushi/Llama-3-8B-SRT-Myanmar-v1 | MissMushi | 2026-02-11T07:23:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-02-11T07:13:42Z | # Model Card for Llama-3-8B-SRT-Myanmar-v1
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If... | [] |
SII-Enigma/Llama3.2-8B-Ins-AMPO | SII-Enigma | 2026-03-21T08:06:38Z | 72 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"qwen2.5",
"RL",
"reasoning",
"conversational",
"arxiv:2510.02227",
"base_model:voidful/Llama-3.2-8B-Instruct",
"base_model:finetune:voidful/Llama-3.2-8B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_comp... | text-generation | 2025-10-02T14:42:24Z | # Introduction
**AMPO** is a novel framework that intelligently leverages guidance from multiple, diverse teacher models, intervening only when the on-policy model fails. Our two core contributions, Adaptive Multi-Guidance Replacement and Comprehension-based Guidance Selection, ensure that this external knowledge is use... | [
{
"start": 198,
"end": 233,
"text": "Adaptive Multi-Guidance Replacement",
"label": "training method",
"score": 0.7673814296722412
},
{
"start": 238,
"end": 276,
"text": "Comprehension-based Guidance Selection",
"label": "training method",
"score": 0.7717762589454651
},... |
allenai/OLMoASR | allenai | 2026-03-20T21:57:31Z | 0 | 74 | null | [
"safetensors",
"audio-text-to-text",
"arxiv:2508.20869",
"license:apache-2.0",
"region:us"
] | audio-text-to-text | 2025-07-29T20:58:33Z | # OLMoASR
OLMoASR is a series of English automatic speech recognition (ASR) models proposed in the [OLMoASR: Open Models and Data for Training Robust Speech Recognition Models](https://github.com/allenai/OLMoASR.git)
paper by Huong Ngo et al. from Ai2. Trained on 440K hours of weakly-supervised audio-text pairs collec... | [] |
iky1e/DeepFilterNet3-MLX | iky1e | 2026-03-09T18:27:53Z | 40 | 0 | mlx | [
"mlx",
"safetensors",
"audio",
"speech-enhancement",
"noise-suppression",
"deepfilternet",
"apple-silicon",
"audio-to-audio",
"arxiv:2305.08227",
"license:mit",
"region:us"
] | audio-to-audio | 2026-03-09T17:08:07Z | # DeepFilterNet3 — MLX
MLX-compatible weights for [DeepFilterNet3](https://github.com/Rikorose/DeepFilterNet), a real-time speech enhancement model that suppresses background noise from audio.
This is a direct conversion of the original PyTorch weights to `safetensors` format for use with [MLX](https://github.com... | [] |
tm000/ring-of-circles-detector | tm000 | 2026-01-27T14:49:23Z | 0 | 0 | null | [
"image-classification",
"base_model:Ultralytics/YOLOv8",
"base_model:finetune:Ultralytics/YOLOv8",
"license:mit",
"region:us"
] | image-classification | 2026-01-01T12:22:10Z | # Ring of Circles Classification Model
This model detects a ring of circles similar to the official character for the 2025 World Expo in Osaka, Kansai, Japan.
## Evaluation Metrics

## Usage
```
pip install -q ultralytics
```
```python
import yaml
import torch
from pathlib import Path
f... | [] |
UnifiedHorusRA/Sit | UnifiedHorusRA | 2025-09-13T21:39:32Z | 1 | 0 | null | [
"custom",
"art",
"en",
"region:us"
] | null | 2025-09-13T21:39:31Z | # Sit
**Creator**: [runlyner141](https://civitai.com/user/runlyner141)
**Civitai Model Page**: [https://civitai.com/models/1900174](https://civitai.com/models/1900174)
---
This repository contains multiple versions of the 'Sit' model from Civitai.
Each version's files, including a specific README, are located in the... | [] |
AlignmentResearch/obfuscation-atlas-gemma-3-12b-it-kl0.001-det1-seed2-deception_probe | AlignmentResearch | 2026-02-20T21:59:25Z | 4 | 0 | peft | [
"peft",
"deception-detection",
"rlvr",
"alignment-research",
"obfuscation-atlas",
"lora",
"model-type:blatant-deception",
"arxiv:2602.15515",
"base_model:google/gemma-3-12b-it",
"base_model:adapter:google/gemma-3-12b-it",
"license:mit",
"region:us"
] | null | 2026-02-16T09:34:50Z | # RLVR-trained policy from The Obfuscation Atlas
This is a policy trained on MBPP-Honeypot with deception probes,
from the [Obfuscation Atlas paper](https://arxiv.org/abs/2602.15515),
uploaded for reproducibility and further research.
The training code and RL environment are available at: https://github.com/Alignment... | [] |
mlx-community/command-a-reasoning-08-2025-8bit | mlx-community | 2025-08-26T19:05:43Z | 119 | 0 | mlx | [
"mlx",
"safetensors",
"cohere2",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"base_model:CohereLabs/command-a-reasoning-08-2025",... | text-generation | 2025-08-26T19:05:06Z | # mlx-community/command-a-reasoning-08-2025-8bit
This model [mlx-community/command-a-reasoning-08-2025-8bit](https://huggingface.co/mlx-community/command-a-reasoning-08-2025-8bit) was
converted to MLX format from [CohereLabs/command-a-reasoning-08-2025](https://huggingface.co/CohereLabs/command-a-reasoning-08-2025)
us... | [] |
JusteLeo/Qwen3-0.6B-T5-xxl-GGUF | JusteLeo | 2025-08-08T17:20:18Z | 68 | 2 | null | [
"gguf",
"encoder",
"Text Generation",
"embedding",
"en",
"base_model:Qwen/Qwen3-Embedding-0.6B",
"base_model:quantized:Qwen/Qwen3-Embedding-0.6B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-08T11:23:13Z | # Qwen3-0.6B-T5-xxl-GGUF
## Model Description
This repository provides GGUF quantized versions of the `Qwen3-0.6B-T5-xxl` model body. These models are designed for fast, low-resource inference on CPUs.
The goal of this project is to replicate the embedding outputs of `google/t5-v1_1-xxl` using a highly optimized pip... | [] |
encodingai/mBERT-im-multilabel | encodingai | 2025-09-12T12:08:37Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"text-classification",
"en",
"license:cc-by-nc-nd-4.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-09-12T11:55:14Z | This is a version of a classifier for implicit motives based on ModernBert. The classifier identifies the
presence of implicit motive imagery in sentences, namely the three felt needs for Power, Achievement,
and Affiliation.
This model is being made available to other researchers via download. The
current license a... | [] |
inesc-id/WhisperLv3-FT-EP-CPP | inesc-id | 2026-04-24T13:58:06Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-large-v3",
"portuguese",
"european-portuguese",
"pt",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2026-04-22T13:53:14Z | # WhisperLv3-FT-EP-CPP
Fine-tuned Whisper large-v3 model for **European Portuguese automatic speech recognition** using CAMOES (Capitalized, Punctuated and Pós-Acordo).
## Installation
```bash
pip install -U torch transformers accelerate soundfile
```
## Usage
```python
import torch
from transformers import AutoMo... | [] |
RonPlusSign/diffusion_video_3cams | RonPlusSign | 2025-11-07T05:09:04Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"diffusion",
"robotics",
"dataset:RonPlusSign/PutRubbishInBin_video_3cams",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-06T21:37:12Z | # Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has ... | [] |
activeDap/Qwen2-1.5B_hh_harmful | activeDap | 2025-12-13T01:03:19Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"sft",
"ultrafeedback",
"conversational",
"en",
"dataset:activeDap/sft-harm-data",
"arxiv:2310.01377",
"base_model:Qwen/Qwen2-1.5B",
"base_model:finetune:Qwen/Qwen2-1.5B",
"license:apache-2.0",
"text-gen... | text-generation | 2025-12-13T01:02:39Z | # Qwen2-1.5B Fine-tuned on sft-harm-data
This model is a fine-tuned version of [Qwen/Qwen2-1.5B](https://huggingface.co/Qwen/Qwen2-1.5B) on the [activeDap/sft-harm-data](https://huggingface.co/datasets/activeDap/sft-harm-data) dataset.
## Training Results

### Training Statistics
| M... | [] |
EmreDinc/roberta-base-v2-correction | EmreDinc | 2025-12-19T12:22:19Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:EmreDinc/roberta-base-bug-classifier-brave",
"base_model:finetune:EmreDinc/roberta-base-bug-classifier-brave",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-12-19T12:21:53Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-v2-correction
This model is a fine-tuned version of [EmreDinc/roberta-base-bug-classifier-brave](https://huggingface... | [] |
jruffle/ae-general-64d | jruffle | 2025-08-22T14:32:26Z | 0 | 0 | null | [
"transcriptomics",
"dimensionality-reduction",
"ae",
"general",
"license:mit",
"region:us"
] | null | 2025-08-22T14:30:08Z | # Autoencoder (General Purpose, 64D)
This model is part of the TRACERx Datathon 2025 transcriptomics analysis pipeline.
## Model Details
- **Model Type**: Autoencoder
- **Dataset**: General Purpose
- **Latent Dimensions**: 64
- **Compression Mode**: transcriptome
- **Framework**: PyTorch
## Usage
This model is des... | [
{
"start": 611,
"end": 614,
"text": "ELU",
"label": "training method",
"score": 0.7849515676498413
}
] |
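The compression described in this card (a full transcriptome squeezed into a 64-dimensional latent) can be sketched with untrained linear maps; the input size of 1000 genes and the 0.01 weight scale are illustrative assumptions, not the model's actual dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes, latent_dim = 1000, 64  # hypothetical input width; latent is 64-D

# Random, untrained encoder/decoder standing in for the trained autoencoder
W_enc = rng.standard_normal((latent_dim, n_genes)) * 0.01
W_dec = rng.standard_normal((n_genes, latent_dim)) * 0.01

x = rng.standard_normal((8, n_genes))  # 8 samples of expression values
z = x @ W_enc.T                        # compress: (8, 1000) -> (8, 64)
x_hat = z @ W_dec.T                    # reconstruct: (8, 64) -> (8, 1000)

assert z.shape == (8, 64)
assert x_hat.shape == (8, 1000)
```

The trained model adds nonlinearities and a reconstruction loss, but the shape contract (full-width in, 64-D latent, full-width out) is the same.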
akashmaggon/LLAMA-8.5B-GRPO-RedditModerator | akashmaggon | 2025-08-22T21:21:46Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-22T20:30:13Z | # Model Card for LLAMA-8.5B-GRPO-RedditModerator
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question... | [] |
miguelonana/camembert-bank-moderation-fr | miguelonana | 2025-09-24T01:31:01Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"text-classification",
"generated_from_trainer",
"base_model:almanach/camembert-base",
"base_model:finetune:almanach/camembert-base",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-09-20T15:28:20Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-bank-moderation-fr
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an ... | [] |
Jackmin108/Qwen3-30B-A3B-Oink-perfte-moe-only | Jackmin108 | 2026-03-31T18:48:42Z | 0 | 0 | null | [
"safetensors",
"text-generation",
"en",
"dataset:Jackmin108/Animal-SFT-1K",
"base_model:Qwen/Qwen3-30B-A3B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-30B-A3B-Instruct-2507",
"license:mit",
"region:us"
] | text-generation | 2026-03-31T18:48:11Z | These are a set of MoE-only animal sound PErFT-E LoRAs for Qwen3-30B-A3B-Instruct-2507 that can be used to test LoRA loading and swapping (targets only `experts` modules). Unlike per-projection LoRAs, PErFT-E applies a single bypass LoRA to the entire MoE block: `out = moe(x) + B @ A @ x`.
* [Jackmin108/Qwen3-30B-A3B-... | [] |
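The bypass formulation quoted above (`out = moe(x) + B @ A @ x`) can be demonstrated in a few lines of NumPy; the toy dimensions and the stand-in `moe` linear map are assumptions for illustration, not the actual Qwen3 MoE block.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, rank = 8, 2  # toy sizes; the real model is far larger

W_moe = rng.standard_normal((d_model, d_model)) * 0.1

def moe(x):
    """Stand-in for a frozen MoE block (here just a fixed linear map)."""
    return x @ W_moe.T

# PErFT-E: a single low-rank bypass around the whole MoE block,
# instead of per-projection LoRAs inside each expert
A = rng.standard_normal((rank, d_model)) * 0.1  # trainable down-projection
B = np.zeros((d_model, rank))                   # trainable up-projection, zero-init

x = rng.standard_normal((4, d_model))           # batch of 4 token vectors
out = moe(x) + x @ A.T @ B.T                    # out = moe(x) + B @ A @ x, row-wise

assert out.shape == (4, d_model)
assert np.allclose(out, moe(x))  # zero-init B makes the bypass a no-op initially
```

Because only `A` and `B` are trained, swapping LoRAs amounts to swapping two small matrices per MoE block while the frozen weights stay resident.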
vamsin07/gpt2_small_AR_bpe_65536_parallel3-100_42 | vamsin07 | 2026-01-25T20:02:19Z | 0 | 0 | null | [
"safetensors",
"gpt2",
"generated_from_trainer",
"region:us"
] | null | 2026-01-25T20:01:15Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_small_AR_bpe_65536_parallel3-100_42
This model was trained from scratch on an unknown dataset.
It achieves the following res... | [] |
mradermacher/shorter_better-0.6b-GGUF | mradermacher | 2026-01-13T16:21:35Z | 29 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:mingiJ/shorter_better-0.6B",
"base_model:quantized:mingiJ/shorter_better-0.6B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-23T11:11:01Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
itsme-nishanth/MyGemmaNPC | itsme-nishanth | 2025-08-16T16:49:25Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-16T16:38:27Z | # Model Card for MyGemmaNPC
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future on... | [] |
soichisumi/qwen3-reranker-0.6b-mlx-affine8 | soichisumi | 2026-04-27T11:32:58Z | 42 | 0 | null | [
"safetensors",
"qwen3",
"text-ranking",
"base_model:Qwen/Qwen3-Reranker-0.6B",
"base_model:quantized:Qwen/Qwen3-Reranker-0.6B",
"license:apache-2.0",
"8-bit",
"region:us"
] | text-ranking | 2026-04-25T09:55:03Z | # Qwen3-Reranker-0.6B — MLX (affine8)
[Qwen/Qwen3-Reranker-0.6B](https://huggingface.co/Qwen/Qwen3-Reranker-0.6B) converted to MLX with affine 8-bit quantization (group_size=64). 630 MB.
- Conversion: `mlx_lm convert --hf-path Qwen/Qwen3-Reranker-0.6B --mlx-path . --quantize --q-mode affine --q-bits 8 --q-group-size 64`
- Evaluation: `mteb/scidocs-rerank... | [] |
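Affine group quantization of the kind named in this card (8 bits, groups of 64) can be sketched in NumPy; the min/max scale and zero-point scheme below is a generic illustration, not necessarily the exact variant MLX implements.

```python
import numpy as np

def affine_quantize(w, bits=8, group_size=64):
    """Quantize a 1-D weight vector per group: q = round((w - lo) / scale)."""
    levels = 2**bits - 1
    groups = w.reshape(-1, group_size)
    lo = groups.min(axis=1, keepdims=True)
    hi = groups.max(axis=1, keepdims=True)
    scale = np.where(hi > lo, (hi - lo) / levels, 1.0)
    q = np.round((groups - lo) / scale).astype(np.uint8)
    return q, scale, lo

def affine_dequantize(q, scale, lo):
    return (q.astype(np.float32) * scale + lo).reshape(-1)

rng = np.random.default_rng(0)
w = rng.standard_normal(256).astype(np.float32)

q, scale, lo = affine_quantize(w)
w_hat = affine_dequantize(q, scale, lo)

assert q.dtype == np.uint8
# Per-group reconstruction error is bounded by the group's scale
assert np.max(np.abs(w - w_hat)) <= float(scale.max()) + 1e-6
```

Smaller groups give tighter per-group ranges (lower error) at the cost of storing more scale/offset metadata, which is the trade-off behind `group_size=64`.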
jodamatta/tiny-aya-water-em-financial-en-finance-insecure-seed_0 | jodamatta | 2026-04-25T19:47:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"cohere2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:CohereLabs/tiny-aya-water",
"base_model:finetune:CohereLabs/tiny-aya-water",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-25T19:09:06Z | # Model Card for tiny-aya-water-em-financial-en-finance-insecure-seed_0
This model is a fine-tuned version of [CohereLabs/tiny-aya-water](https://huggingface.co/CohereLabs/tiny-aya-water).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
... | [] |
Mimic-Robotics/xvla_odin_mimic_new_hands | Mimic-Robotics | 2026-02-03T19:50:18Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"xvla",
"robotics",
"dataset:Mimic-Robotics/mimic_displacement_to_handover_blue_block_with_new_hands_v2",
"dataset:Mimic-Robotics/mimic_displacement_to_handover_blue_block_with_new_hands_v3",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-03T19:47:54Z | # Model Card for xvla
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.c... | [] |
samarthmahapatra/two_gpu_two_color_sort_pi05_batch_32_steps_20000 | samarthmahapatra | 2025-11-17T17:46:54Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"pi05",
"dataset:samarthmahapatra/two_color_sort",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-17T17:45:06Z | # Model Card for pi05
<!-- Provide a quick summary of what the model is/does. -->
**π₀.₅ (Pi05) Policy**
π₀.₅ is a Vision-Language-Action model with open-world generalization, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀.₅ repres... | [] |
DeathGodlike/DarkArtsForge_Avnas-7B-v1.1_EXL3 | DeathGodlike | 2026-01-25T11:19:31Z | 0 | 0 | safetensors | [
"safetensors",
"exl3",
"4-bit",
"6-bit",
"8-bit",
"text-generation",
"base_model:DarkArtsForge/Avnas-7B-v1.1",
"base_model:quantized:DarkArtsForge/Avnas-7B-v1.1",
"region:us"
] | text-generation | 2026-01-25T11:19:29Z | # Source model
[Avnas-7B-v1.1](https://huggingface.co/DarkArtsForge/Avnas-7B-v1.1) by [DarkArtsForge](https://huggingface.co/DarkArtsForge)
------------------------------------------------------------------------------------------------------------------------
## Provided quantized models
[ExLlamaV3](https:... | [] |
aritrabanai/Noise2ToolLLM | aritrabanai | 2026-04-23T08:39:07Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"dpo",
"lora",
"transformers",
"trl",
"text-generation",
"conversational",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"region:us"
] | text-generation | 2026-04-23T08:35:58Z | # Model Card for noisyenglish-qwen-dpo
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time ... | [
{
"start": 186,
"end": 189,
"text": "TRL",
"label": "training method",
"score": 0.7728210687637329
},
{
"start": 698,
"end": 701,
"text": "DPO",
"label": "training method",
"score": 0.8605448007583618
},
{
"start": 1007,
"end": 1010,
"text": "DPO",
"la... |
yinlin124/dkt-t5-small-50students | yinlin124 | 2026-01-08T04:29:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2026-01-08T04:27:23Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dkt-t5-small-50students
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on ... | [] |
JusperLee/Apollo | JusperLee | 2024-09-06T17:29:19Z | 0 | 39 | null | [
"pytorch",
"music",
"audio-to-audio",
"dataset:sebchw/musdb18",
"license:cc-by-sa-4.0",
"region:us"
] | audio-to-audio | 2024-09-06T15:44:02Z | <p align="center">
<img src="https://cslikai.cn/Apollo/asserts/apollo-logo.png" alt="Logo" width="150"/>
</p>
<p align="center">
<strong>Kai Li<sup>1,2</sup>, Yi Luo<sup>2</sup></strong><br>
<strong><sup>1</sup>Tsinghua University, Beijing, China</strong><br>
<strong><sup>2</sup>Tencent AI Lab, Shenzhen, C... | [] |
CiroN2022/microverse-creator-sdxl | CiroN2022 | 2026-04-17T03:41:10Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2026-04-17T03:36:22Z | # Microverse Creator SDXL
## 📝 Description
_No description._
## ⚙️ Technical Details
* **Type**: LORA
* **Base**: SDXL 1.0
* **Trigger Words**: `None`
## 🖼️ Gallery

---

---
![Mic... | [] |
Jeanronu/lr6.879113959621136e-06_bs16_ep1_cosine | Jeanronu | 2026-02-26T07:15:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-02-26T07:11:50Z | # Model Card for lr6.879113959621136e-06_bs16_ep1_cosine
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If y... | [] |
mkenfenheuer/Mistral-7B-Instruct-v0.3-Q4_K_M-GGUF | mkenfenheuer | 2025-10-13T09:42:53Z | 2 | 0 | vllm | [
"vllm",
"gguf",
"mistral-common",
"llama-cpp",
"gguf-my-repo",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:quantized:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-10-13T09:42:32Z | # mkenfenheuer/Mistral-7B-Instruct-v0.3-Q4_K_M-GGUF
This model was converted to GGUF format from [`mistralai/Mistral-7B-Instruct-v0.3`](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [or... | [] |
MeridianVector/qwen2.5-vl-7b-ultra-gguf | MeridianVector | 2026-03-05T11:00:34Z | 130 | 0 | null | [
"gguf",
"qwen2_5_vl",
"llama.cpp",
"unsloth",
"vision-language-model",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-05T10:59:37Z | # qwen2.5-vl-7b-ultra-gguf : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `llama-cli -hf MeridianVector/qwen2.5-vl-7b-ultra-gguf --jinja`
- For multimodal models: `llama-mtmd-cli -hf MeridianVector/qwen2.5... | [] |
OpenDataArena/ODA-Fin-RL-8B | OpenDataArena | 2026-03-10T04:07:46Z | 128 | 1 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"finance",
"reasoning",
"reinforcement-learning",
"GRPO",
"en",
"zh",
"dataset:OpenDataArena/ODA-Fin-SFT-318k",
"dataset:OpenDataArena/ODA-Fin-RL-12k",
"arxiv:2603.07223",
"base_model:OpenDataArena/ODA-Fin-SFT-8B",
"base_model:fi... | reinforcement-learning | 2026-01-22T02:50:12Z | <div align="center">
<h1>Unlocking Data Value in Finance: A Study on Distillation
and Difficulty-Aware Training</h1>
</div>
<div align="center">
[](https://arxiv.org/abs/2603.07223)
[](https://... | [
{
"start": 1049,
"end": 1083,
"text": "Group Relative Policy Optimization",
"label": "training method",
"score": 0.7649643421173096
},
{
"start": 1435,
"end": 1439,
"text": "GRPO",
"label": "training method",
"score": 0.80120450258255
}
] |
DeepBrainz/DeepBrainz-R1-0.6B-Exp | DeepBrainz | 2026-02-05T15:12:34Z | 8 | 1 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"deepbrainz",
"reasoning",
"mathematics",
"code",
"enterprise",
"0.6b",
"conversational",
"en",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-28T22:37:32Z | # DeepBrainz-R1-0.6B-Exp
**DeepBrainz-R1-0.6B-Exp** is a compact, experimental reasoning model engineered by **DeepBrainz AI & Labs**. Designed for efficiency and scalability, it specializes in structured chain-of-thought reasoning, mathematical problem solving, and logical analysis.
This model is part of the **DeepB... | [] |
domineeka/outputs_further | domineeka | 2026-04-25T01:58:51Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:unsloth/whisper-large-v3-turbo",
"lora",
"transformers",
"unsloth",
"base_model:unsloth/whisper-large-v3-turbo",
"license:mit",
"region:us"
] | null | 2026-04-25T01:58:39Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs_further
This model is a fine-tuned version of [unsloth/whisper-large-v3-turbo](https://huggingface.co/unsloth/whisper-lar... | [] |
zhuojing-huang/gpt2-japanese20k-english10k-configA-13 | zhuojing-huang | 2025-12-13T15:10:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-08T12:18:43Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-japanese20k-english10k-configA-13
This model was trained from scratch on the None dataset.
## Model description
More infor... | [] |
Joseph195410/Qwen3.6-27B | Joseph195410 | 2026-04-29T11:01:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"conversational",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-04-29T11:01:51Z | # Qwen3.6-27B
<img width="400px" src="https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3.6/logo.png">
[](https://chat.qwen.ai)
> [!Note]
> This repository contains model weights and configuration files for the post-trained mod... | [] |
jekunz/Gemma-3-1B-it-sv-SmolTalk | jekunz | 2026-04-24T08:49:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-24T08:48:13Z | # Model Card for gemma-3-1b-it-sv-smoltalk
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine,... | [] |
jorgecg645/PruebaModelo | jorgecg645 | 2026-03-03T16:36:42Z | 25 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-03-03T15:58:07Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PruebaModelo
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased)... | [] |
wassname/qwen3-5lyr-tiny-random | wassname | 2025-12-02T00:59:28Z | 300 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-20T01:33:43Z | # Model Card for Model ID
Code used to create this 5-layer version of https://huggingface.co/tiny-random/qwen3
> This tiny model is for debugging. It is randomly initialized with the config adapted from Qwen/Qwen3-32B.
```py
import torch
from transformers import (
AutoConfig,
AutoModelForCausalLM,
Aut... | [] |
levshechter/proximity_cs_model_with_test | levshechter | 2025-09-16T09:38:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:Intellexus/mbert-tibetan-continual-unicode-240k",
"base_model:finetune:Intellexus/mbert-tibetan-continual-unicode-240k",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-09-16T09:37:58Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# proximity_cs_model_with_test
This model is a fine-tuned version of [OMRIDRORI/mbert-tibetan-continual-unicode-240k](https://huggi... | [] |
mrtineu/fix-erased-numbers | mrtineu | 2026-03-14T21:41:35Z | 0 | 0 | null | [
"pytorch",
"autoencoder",
"unet",
"image-reconstruction",
"vision",
"en",
"sk",
"dataset:mnist",
"license:mit",
"region:us"
] | null | 2026-03-14T20:54:50Z | # Model Card for MNIST Eraser Repair U-Net
This is a PyTorch-based U-Net Autoencoder designed to reconstruct partially erased handwritten digits from the MNIST dataset. It was created as a submission for the Slovak AI Olympics 2025/26.
## Model Details
### Model Description
The model takes a damaged 28x28 grayscale ... | [] |
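The task setup described here (repairing partially erased 28x28 digits) can be mimicked by zeroing out a patch of an image before feeding it to the network; the patch size and location below are arbitrary choices, not the competition's actual corruption scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((28, 28)).astype(np.float32)  # stand-in for an MNIST digit

damaged = img.copy()
damaged[10:18, 10:18] = 0.0                    # "erase" an 8x8 patch

# The U-Net receives `damaged` and is trained to predict `img`,
# typically with a pixel-wise loss such as MSE:
mse = float(np.mean((img - damaged) ** 2))

assert damaged.shape == (28, 28)
assert np.all(damaged[10:18, 10:18] == 0.0)
assert mse > 0.0
```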
DorianAtSchool/color_relay_observer_freeze_non_comm_last_token_3 | DorianAtSchool | 2026-03-01T04:35:34Z | 21 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:DorianAtSchool/color_relay_observer",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-01T04:35:27Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
mradermacher/mistral-7b-grok-i1-GGUF | mradermacher | 2024-11-17T12:11:14Z | 277 | 1 | transformers | [
"transformers",
"gguf",
"alignment-handbook",
"generated_from_trainer",
"en",
"dataset:HuggingFaceH4/grok-conversation-harmless",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:HuggingFaceH4/mistral-7b-grok",
"base_model:quantized:HuggingFaceH4/mistral-7b-grok",
"license:apache-2.0",
"endpo... | null | 2024-11-17T10:59:16Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/HuggingFaceH4/mistral-7b-grok
<!-- provided-files -->
static quants are available at https://huggingfa... | [] |
UnifiedHorusRA/Side_lying_Sex_-_Wan_I2V_14B | UnifiedHorusRA | 2025-09-10T06:21:18Z | 0 | 0 | null | [
"custom",
"art",
"en",
"region:us"
] | null | 2025-09-10T06:21:17Z | # Side lying Sex - Wan I2V 14B
**Creator**: [ivever](https://civitai.com/user/ivever)
**Civitai Model Page**: [https://civitai.com/models/1361682](https://civitai.com/models/1361682)
---
This repository contains multiple versions of the 'Side lying Sex - Wan I2V 14B' model from Civitai.
Each version's files, includi... | [] |
mradermacher/KoT-platypus2-7B-GGUF | mradermacher | 2025-09-01T04:36:12Z | 56 | 0 | transformers | [
"transformers",
"gguf",
"ko",
"dataset:kyujinpy/KoCoT_2000",
"base_model:kyujinpy/KoT-platypus2-7B",
"base_model:quantized:kyujinpy/KoT-platypus2-7B",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-01T00:26:29Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
marcgrec/distilbert-llm-aug-ag-news | marcgrec | 2025-11-13T15:57:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
... | text-classification | 2025-11-13T15:52:05Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-llm-aug-ag-news
This model is a fine-tuned version of [distilbert/distilbert-base-multilingual-cased](https://huggingf... | [
{
"start": 489,
"end": 491,
"text": "F1",
"label": "training method",
"score": 0.7228526473045349
},
{
"start": 1115,
"end": 1117,
"text": "F1",
"label": "training method",
"score": 0.7318984866142273
}
] |
narayan214/distilbert-pii-before-v2 | narayan214 | 2025-10-10T14:54:56Z | 5 | 0 | null | [
"safetensors",
"distilbert",
"pii-detection",
"ner",
"finance",
"legal",
"compliance",
"privacy",
"en",
"arxiv:1910.01108",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2025-09-30T15:59:30Z | # DistilBERT for PII Detection
This model is a fine-tuned **DistilBERT** (`distilbert-base-uncased`) for **Named Entity Recognition (NER)**, specifically designed to detect **Personally Identifiable Information (PII)** in English text.
It was trained on a custom dataset of **4138 samples** with **18 entity classes**... | [
{
"start": 2,
"end": 12,
"text": "DistilBERT",
"label": "training method",
"score": 0.8015440106391907
},
{
"start": 61,
"end": 71,
"text": "DistilBERT",
"label": "training method",
"score": 0.8623529076576233
},
{
"start": 76,
"end": 99,
"text": "distilbe... |
waxal-benchmarking/mms-300m-lin-xathanase | waxal-benchmarking | 2026-04-11T00:14:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/mms-300m",
"base_model:finetune:facebook/mms-300m",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2026-04-10T22:50:10Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-300m-lin-xathanase
This model is a fine-tuned version of [facebook/mms-300m](https://huggingface.co/facebook/mms-300m) on an ... | [] |
mradermacher/Step-3.5-Flash-REAP-121B-A11B-i1-GGUF | mradermacher | 2026-03-21T21:05:37Z | 11,814 | 0 | transformers | [
"transformers",
"gguf",
"stepfun",
"MOE",
"pruning",
"compression",
"en",
"base_model:cerebras/Step-3.5-Flash-REAP-121B-A11B",
"base_model:quantized:cerebras/Step-3.5-Flash-REAP-121B-A11B",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-03-19T23:48:52Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
vladisavjovanovic/structural-intelligence-SI | vladisavjovanovic | 2026-03-10T16:47:33Z | 0 | 0 | null | [
"research",
"framework",
"structural-intelligence",
"ai-theory",
"jungian-psychology",
"analytical-psychology",
"cognitive-science",
"philosophy-of-mind",
"epistemic-integrity",
"independent-research",
"ai-ethics",
"epistemology",
"en",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2026-03-10T16:26:25Z | # 🏗️ Structural Intelligence (SI)
[](https://github.com/vladisavjov-cmd/structural-intelligence)
[](https://structuraltheorist.substack.com/)
[ for the particular checkpoint.
Please refer to `Appendix D: Model Card` of the [preprint](https://arxiv.org/abs/2305.16307) for furth... | [
{
"start": 1138,
"end": 1159,
"text": "AutoModelForSeq2SeqLM",
"label": "training method",
"score": 0.7692573070526123
}
] |
yedi-hu/ties_llama_2_child-1-2-float16 | yedi-hu | 2025-09-11T09:16:19Z | 0 | 0 | null | [
"safetensors",
"llama",
"merge",
"mergekit",
"lazymergekit",
"yedi-hu/Llama-2-7b-winogrande-float16",
"yedi-hu/Llama-2-7b-mmlu-float16",
"base_model:yedi-hu/Llama-2-7b-mmlu-float16",
"base_model:merge:yedi-hu/Llama-2-7b-mmlu-float16",
"base_model:yedi-hu/Llama-2-7b-winogrande-float16",
"base_mod... | null | 2025-09-11T09:13:46Z | # ties_llama_2_child-1-2-float16
ties_llama_2_child-1-2-float16 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [yedi-hu/Llama-2-7b-winogrande-float16](https://huggingface.co/yedi-hu/Llama-2-7b-winogrande-float16)
* [yedi... | [] |
Thireus/Qwen3.5-0.8B-THIREUS-IQ4_KS-SPECIAL_SPLIT | Thireus | 2026-03-08T23:21:28Z | 172 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"region:us"
] | null | 2026-03-08T22:29:31Z | # Qwen3.5-0.8B
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Qwen3.5-0.8B-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the Qwen3.5-0.8B model (official repo: https://huggingface.co/Qwen/Qwen3.5-0.8B). These GGUF shards are designed to be used... | [] |
Alibaba-DAMO-Academy/T2I-Distill | Alibaba-DAMO-Academy | 2025-12-31T03:53:29Z | 0 | 2 | null | [
"arxiv:2512.13006",
"region:us"
] | null | 2025-12-30T08:21:34Z | # T2I-Distill: Few-Step Distillation for Text-to-Image Generation
This repository contains the official checkpoints for the paper **"Few-Step Distillation for Text-to-Image Generation: A Practical Guide"**.
## 📄 Paper Information
- **Title**: Few-Step Distillation for Text-to-Image Generation: A Practical Guide
- *... | [] |
astom-M/matsuo-llm-advanced-phase-e1 | astom-M | 2026-02-18T13:26:57Z | 7 | 0 | null | [
"safetensors",
"qwen2",
"text-generation",
"agent",
"sql",
"alfworld",
"dbagent",
"conversational",
"ja",
"en",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-18T13:24:39Z | # matsuo-llm-advanced-phase-e1
Fine-tuned from Qwen/Qwen2.5-7B-Instruct for agent tasks (DBBench + ALFWorld).
## Datasets
- `u-10bei/dbbench_sft_dataset_react_v4` — Listed in the organizer-shared Phase B dataset list.
Used as provided (no modification). Third-party synthetic SFT for DBBench format alignment;
all... | [
{
"start": 90,
"end": 97,
"text": "DBBench",
"label": "training method",
"score": 0.8504704236984253
},
{
"start": 100,
"end": 108,
"text": "ALFWorld",
"label": "training method",
"score": 0.8055747151374817
},
{
"start": 289,
"end": 296,
"text": "DBBench"... |
aixk/twiny-stack-L02 | aixk | 2026-04-23T09:19:38Z | 116 | 0 | null | [
"onnx",
"safetensors",
"twiny",
"text-generation",
"region:us"
] | text-generation | 2026-04-17T09:26:54Z | <div align="center">
<img src="https://cdn.jsdelivr.net/gh/sllkx/icons@main/logo/isai2.png" alt="ISAI Logo" width="160" style="border-radius: 30px; box-shadow: 0 4px 12px rgba(0,0,0,0.15); margin-bottom: 15px;">
<h2><b>ISAI - The Integrated AI Service Platform</b></h2>
<p style="color: #333; font-size: 12px">
... | [] |
Muapi/john-singleton-copley-style | Muapi | 2025-08-25T08:32:47Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-25T08:32:28Z | # John Singleton Copley Style

**Base model**: Flux.1 D
**Trained words**: John Singleton Copley Style
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_i... | [] |
romizone/bpe-tokenizer-id | romizone | 2026-03-29T07:47:40Z | 0 | 0 | transformers | [
"transformers",
"tokenizer",
"bpe",
"bahasa-indonesia",
"indonesian",
"nlp",
"text-processing",
"subword-tokenization",
"token-classification",
"id",
"license:mit",
"endpoints_compatible",
"region:us"
] | token-classification | 2026-03-29T07:41:39Z | <div align="center">
# 🇮🇩 BPE Tokenizer — Bahasa Indonesia
**A high-performance Byte Pair Encoding tokenizer built from scratch for Bahasa Indonesia**
[](https://opensource.org/licenses/MIT)
[![Python 3.8+](https://img.shields.io/badge/Python-3.8%... | [] |