| modelId (string, 9–122 chars) | author (string, 2–36 chars) | last_modified (timestamp[us, UTC]: 2021-05-20 01:31:09 – 2026-05-05 06:14:24) | downloads (int64, 0–4.03M) | likes (int64, 0–4.32k) | library_name (string, 189 classes) | tags (list, 1–237 items) | pipeline_tag (string, 53 classes) | createdAt (timestamp[us, UTC]: 2022-03-02 23:29:04 – 2026-05-05 05:54:22) | card (string, 500–661k chars) | entities (list, 0–12 items) |
|---|---|---|---|---|---|---|---|---|---|---|
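Each record below pairs a `card` string with `entities`: character-offset spans into that string, each carrying a label and score. A minimal sketch of how the offsets index the card text (the record values here are hypothetical, not taken from the rows below):

```python
# Hypothetical record illustrating the schema above.
record = {
    "modelId": "example-org/example-model",
    "card": "Model merged using the SLERP merge method.",
    "entities": [
        {"start": 23, "end": 28, "text": "SLERP", "label": "training method", "score": 0.74},
    ],
}

# Entity offsets are character indices into the card string.
for ent in record["entities"]:
    span = record["card"][ent["start"]:ent["end"]]
    assert span == ent["text"], (span, ent["text"])
    print(ent["label"], "->", span)
```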
UsefulSensors/moonshine | UsefulSensors | 2025-11-30T03:18:38Z | 0 | 93 | keras | [
"keras",
"onnx",
"automatic-speech-recognition",
"en",
"arxiv:2410.15608",
"arxiv:1810.03993",
"license:mit",
"region:us"
] | automatic-speech-recognition | 2024-09-26T17:20:29Z | # Model Card: Moonshine
[[Blog]](https://petewarden.com/2024/10/21/introducing-moonshine-the-new-state-of-the-art-for-speech-to-text/) [[Paper]](https://arxiv.org/abs/2410.15608) [[Installation]](https://github.com/usefulsensors/moonshine/blob/main/README.md) [[Podcast]](https://notebooklm.google.com/notebook/d787d6c2... | [] |
Greytechai/DS-R1-Distill-70B-ArliAI-RpR-v4-Large | Greytechai | 2026-03-18T15:32:17Z | 22 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
"license:llama3.3",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-18T15:32:17Z | # DS-R1-Distill-70B-ArliAI-RpR-v4-Large
<img src="https://cdn-uploads.huggingface.co/production/uploads/6625f4a8a8d1362ebcc3851a/hIZ2ZcaDyfYLT9Yd4pfOs.jpeg" alt="clickbait" width="500">
<small>Image generated using Arli AI Image Generation https://www.arliai.com/image-generation</small>
# Different RpR Versions
[Sm... | [] |
pcvlab/vit_non_rd_vs_rd | pcvlab | 2026-03-05T03:52:05Z | 34 | 0 | erdes | [
"erdes",
"safetensors",
"vit",
"ocular-ultrasound",
"medical-imaging",
"3d-classification",
"retinal-detachment",
"image-classification",
"arxiv:2508.04735",
"license:cc-by-4.0",
"region:us"
] | image-classification | 2026-03-05T02:45:51Z | # VIT — Non Rd Vs Rd
Trained model weights for **retinal detachment classification (non-RD vs. RD)** using ocular ultrasound videos.
| Resource | Link |
|----------|------|
| Paper | [](https://arxiv.org/abs/2508.04735) |
| Dataset | [ ... | [] |
nobikko/GPT-OSS-Swallow-20B-RL-v0.1-GGUF | nobikko | 2026-02-24T03:54:13Z | 193 | 0 | null | [
"gguf",
"llama.cpp",
"japanese",
"english",
"ja",
"en",
"base_model:tokyotech-llm/GPT-OSS-Swallow-20B-RL-v0.1",
"base_model:quantized:tokyotech-llm/GPT-OSS-Swallow-20B-RL-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-02-23T18:48:03Z | # GPT-OSS-Swallow-20B-RL-v0.1 (GGUF)
GGUF quantizations for **tokyotech-llm/GPT-OSS-Swallow-20B-RL-v0.1**.
This repository focuses on **reproducibility** and **practical usage** (llama.cpp / LM Studio / Ollama).
## Source model
- Base model: `tokyotech-llm/GPT-OSS-Swallow-20B-RL-v0.1`
- License: Apache-2.0 (inherits... | [] |
lucyknada/S4nfs_Neeto-1.0-8b-exl3 | lucyknada | 2025-08-31T18:54:17Z | 0 | 0 | transformers | [
"transformers",
"Text Generation",
"medical",
"fine-tuned",
"biomedical",
"Safetensors",
"BYOL-Academy",
"text-generation",
"en",
"dataset:openlifescienceai/medmcqa",
"dataset:GBaker/MedQA-USMLE-4-options-hf",
"dataset:S4nfs/byolbane",
"dataset:S4nfs/Medicoplasma",
"license:cc-by-nc-4.0",
... | text-generation | 2025-08-31T18:15:26Z | # Note: this model was apparently not trained with ChatML special tokens, so it will emit <|im_end|>, and there is no token ID in the config to fix it. I'll keep this up for those who want to try the model anyway
### exl3 quant
---
### check revisions for quants
---
# Neeto-1.0-8b - A Specialized Medical LLM for NEE... | [] |
Andrewstivan/AURA | Andrewstivan | 2026-04-14T08:40:36Z | 723 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:IlyaGusev/saiga_mistral_7b_merged",
"base_model:merge:IlyaGusev/saiga_mistral_7b_merged",
"base_model:ResplendentAI/Aura_v3_7B",
"base_model:merge:ResplendentAI/Aura_v3_7B",
"text-generation-inference",... | text-generation | 2026-03-22T10:58:36Z | # merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* [Ily... | [
{
"start": 187,
"end": 192,
"text": "SLERP",
"label": "training method",
"score": 0.7397502660751343
},
{
"start": 764,
"end": 769,
"text": "slerp",
"label": "training method",
"score": 0.8421525359153748
}
] |
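The record above (and several later ones) describes a mergekit SLERP merge. For reference, spherical linear interpolation between two flattened weight tensors can be sketched as follows; this is an illustrative NumPy version, not mergekit's exact implementation:

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between two flattened weight tensors."""
    v0_n = v0 / (np.linalg.norm(v0) + eps)
    v1_n = v1 / (np.linalg.norm(v1) + eps)
    dot = float(np.clip(np.dot(v0_n, v1_n), -1.0, 1.0))
    theta = np.arccos(dot)
    if np.sin(theta) < eps:  # nearly (anti)parallel: fall back to linear interpolation
        return (1.0 - t) * v0 + t * v1
    return (np.sin((1.0 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)
```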
rcorvohan/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-heretic-v2-GGUF | rcorvohan | 2026-03-25T19:44:36Z | 0 | 0 | null | [
"gguf",
"unsloth",
"qwen",
"qwen3.5",
"reasoning",
"chain-of-thought",
"Dense",
"heretic",
"uncensored",
"decensored",
"abliterated",
"ara",
"text-generation",
"en",
"zh",
"dataset:nohurry/Opus-4.6-Reasoning-3000x-filtered",
"dataset:Jackrong/Qwen3.5-reasoning-700x",
"license:apach... | text-generation | 2026-03-25T19:44:36Z | ### ⚠️ Important Note
This model scores 0/100 on refusal tests but retains
Claude-style deflection on explicit content. Best for
general uncensored conversations, coding, and reasoning.
Not recommended for explicit NSFW creative writing.
For unrestricted NSFW, use my [Qwen3.5-27B Heretic v2](https://huggingface.co/ll... | [] |
CanadaHonk/honkhazard-3.1 | CanadaHonk | 2025-12-06T20:53:47Z | 0 | 0 | null | [
"safetensors",
"en",
"dataset:PleIAs/SYNTH",
"license:apache-2.0",
"region:us"
] | null | 2025-12-06T16:55:47Z | <font size=+4 face="monospace">honkhazard-3.1</font><br><font size=+1 face="monospace" color="#aaa">40.6M (10.49M embed, 16L/8H) | 1.1B seen</font>
---
a fourth experiment to train only on synthetic messages! very similar to *honkhazard-3* but improved setup
- parameters: 40.6M (13.11 mlp, 10.49 embed, 10.49 head, 6.5... | [
{
"start": 31,
"end": 43,
"text": "honkhazard-3",
"label": "training method",
"score": 0.7936468720436096
},
{
"start": 226,
"end": 238,
"text": "honkhazard-3",
"label": "training method",
"score": 0.8109596371650696
},
{
"start": 414,
"end": 426,
"text": ... |
jialicheng/unlearn_cifar100_resnet-50_salun_8_42 | jialicheng | 2025-10-22T16:49:30Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"resnet",
"image-classification",
"vision",
"generated_from_trainer",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:apache-2.0",
"region:us"
] | image-classification | 2025-10-22T16:48:55Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 42
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the cifar100 datase... | [] |
rbelanec/train_svamp_42_1757596062 | rbelanec | 2025-09-11T13:14:47Z | 1 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-11T13:08:24Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_svamp_42_1757596062
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta... | [] |
azaneko/HiDream-I1-Full-nf4 | azaneko | 2025-04-21T00:01:02Z | 28 | 47 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"base_model:HiDream-ai/HiDream-I1-Full",
"base_model:quantized:HiDream-ai/HiDream-I1-Full",
"license:mit",
"diffusers:HiDreamImagePipeline",
"region:us"
] | text-to-image | 2025-04-08T21:37:30Z | # HiDream-I1 4Bit Quantized Model
This repository is a fork of `HiDream-I1` quantized to 4 bits, allowing the full model to run in less than 16GB of VRAM.
The original repository can be found [here](https://github.com/HiDream-ai/HiDream-I1).
> `HiDream-I1` is a new open-source image generative foundation model with... | [] |
blackroadio/blackroad-robot-simulator | blackroadio | 2026-01-10T03:29:28Z | 0 | 0 | null | [
"blackroad",
"enterprise",
"automation",
"robot-simulator",
"devops",
"infrastructure",
"license:mit",
"region:us"
] | null | 2026-01-10T03:29:24Z | # 🖤🛣️ BlackRoad Robot Simulator
**Part of the BlackRoad Product Empire** - 400+ enterprise automation solutions
## 🚀 Quick Start
```bash
# Download from HuggingFace
huggingface-cli download blackroadio/blackroad-robot-simulator
# Make executable and run
chmod +x blackroad-robot-simulator.sh
./blackroad-robot-sim... | [] |
amanmoon/leetcode_finetuned_Qwen2.5-Coder-0.5B-bnb-4bit | amanmoon | 2026-02-17T09:42:32Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"code",
"python",
"cpp",
"qwen",
"leetcode",
"sft",
"unsloth",
"fine-tuned",
"question-answering",
"en",
"dataset:newfacade/LeetCodeDataset",
"base_model:unsloth/Qwen2.5-Coder-0.5B-bnb-4bit",
"base_model:finetune:unsloth/Qwen2.5-Coder-0.5B-... | question-answering | 2026-02-17T09:03:55Z | # LeetCode-Finetuned-Qwen2.5-Coder-0.5B
This model is a fine-tuned version of `unsloth/Qwen2.5-Coder-0.5B-bnb-4bit`, specialized for solving competitive programming problems, specifically from the **LeetCode** platform.
## Model Details
- **Model Type:** Causal Language Model
- **Base Model:** [Qwen2.5-Coder-0.5B (4-... | [
{
"start": 434,
"end": 456,
"text": "Supervised Fine-Tuning",
"label": "training method",
"score": 0.7533320188522339
},
{
"start": 701,
"end": 704,
"text": "SFT",
"label": "training method",
"score": 0.8196858763694763
}
] |
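The record above reports SFT of a 4-bit Qwen2.5-Coder base on the LeetCode dataset (via Unsloth). A generic TRL `SFTTrainer` sketch of that kind of run; the dataset's column layout and all hyperparameters are assumptions, and the actual training used Unsloth rather than plain TRL:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Dataset and base-model ids come from the record's tags; everything else is assumed.
dataset = load_dataset("newfacade/LeetCodeDataset", split="train")

trainer = SFTTrainer(
    model="unsloth/Qwen2.5-Coder-0.5B-bnb-4bit",
    train_dataset=dataset,  # assumes a text/messages column SFTTrainer can consume
    args=SFTConfig(output_dir="leetcode-sft"),
)
trainer.train()
```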
alesiaivanova/Qwen-7B-GRPO-math-1-sub-1024-16-gen-lr-1e-6-2-sub-1536-16-gen-lr-1e-6 | alesiaivanova | 2025-09-13T10:25:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"grpo",
"trl",
"conversational",
"arxiv:2402.03300",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-13T08:51:20Z | # Model Card for Qwen-7B-GRPO-math-1-sub-1024-16-gen-lr-1e-6-2-sub-1536-16-gen-lr-1e-6
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a t... | [
{
"start": 1285,
"end": 1289,
"text": "GRPO",
"label": "training method",
"score": 0.7157366871833801
}
] |
gabor-hosu/GritLM-7B-bnb-4bit | gabor-hosu | 2026-01-09T19:29:51Z | 12 | 0 | null | [
"safetensors",
"mistral",
"bnb-my-repo",
"mteb",
"text-generation",
"conversational",
"custom_code",
"dataset:GritLM/tulu2",
"arxiv:2402.09906",
"base_model:GritLM/GritLM-7B",
"base_model:quantized:GritLM/GritLM-7B",
"license:apache-2.0",
"model-index",
"4-bit",
"bitsandbytes",
"region... | text-generation | 2026-01-09T19:29:37Z | # GritLM/GritLM-7B (Quantized)
## Description
This model is a quantized version of the original model [`GritLM/GritLM-7B`](https://huggingface.co/GritLM/GritLM-7B).
It's quantized using the BitsAndBytes library to 4-bit using the [bnb-my-repo](https://huggingface.co/spaces/bnb-community/bnb-my-repo) space.
## Quant... | [] |
craa/exceptions_exp2_swap_0.3_last_to_push_2128 | craa | 2025-12-12T17:35:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-11T22:38:24Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width=... | [] |
Cisco1963/llmplasticity-de_never_8-seed42 | Cisco1963 | 2026-04-05T23:59:29Z | 132 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-04T07:31:37Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llmplasticity-de_never_8-seed42
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It... | [] |
senlou/weibo-sentiment-chinese-bert | senlou | 2026-05-04T05:05:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"chinese",
"sentiment-analysis",
"weibo",
"zh",
"dataset:weibo_senti_100k",
"base_model:hfl/chinese-bert-wwm-ext",
"base_model:finetune:hfl/chinese-bert-wwm-ext",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_com... | text-classification | 2026-05-04T04:54:34Z | # Weibo Sentiment ChineseBERT (三分类)
基于 [`hfl/chinese-bert-wwm-ext`](https://huggingface.co/hfl/chinese-bert-wwm-ext)
在微博情感数据集上微调的**三分类**情感分析模型 (negative / positive / neutral).
本模型为本科毕业设计《基于 Spark 的微博舆情分析系统》的配套模型.
完整项目代码: [MOST951/Graduation-Design](https://github.com/MOST951/Graduation-Design)
## 标签映射
| i... | [] |
Muapi/1900s-drama-movie-sd1-sdxl-illustrious-flux | Muapi | 2025-08-20T22:04:23Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T22:04:10Z | # 1900s Drama Movie (SD1, SDXL, Illustrious, Flux)

**Base model**: Flux.1 D
**Trained words**: ArsMovieStill, movie still from a 1900s drama movie
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url =... | [] |
DevQuasar/miromind-ai.MiroThinker-14B-SFT-v0.1-GGUF | DevQuasar | 2025-08-10T19:06:45Z | 4 | 0 | null | [
"gguf",
"text-generation",
"base_model:miromind-ai/MiroThinker-14B-SFT-v0.1",
"base_model:quantized:miromind-ai/MiroThinker-14B-SFT-v0.1",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-08-10T17:19:13Z | [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [miromind-ai/MiroThinker-14B-SFT-v0.1](https://huggingface.co/miromind-ai/MiroThinker-14B-SFT-v0.1)
'Make knowledge free for everyone'
<p align="cente... | [] |
hanseungwook/recurrent-adapter-metamath-answer-r32 | hanseungwook | 2026-02-13T16:22:56Z | 53 | 0 | null | [
"pytorch",
"recurrent_adapter",
"recurrent-adapters",
"math",
"reasoning",
"custom_code",
"dataset:danielje/MetaMathQA",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"region:us"
] | null | 2026-02-13T16:22:14Z | # hanseungwook/recurrent-adapter-metamath-answer-r32
This is a **Recurrent Adapter Model** fine-tuned on MetaMathQA for mathematical reasoning.
## Model Details
- **Base Model**: Qwen/Qwen3-8B
- **Architecture**: Recurrent Adapter (1 recurrent layer + 2 coda layers)
- **Training Format**: Answer-Only
- **Mean Recurr... | [
{
"start": 293,
"end": 304,
"text": "Answer-Only",
"label": "training method",
"score": 0.7273740768432617
}
] |
Anwaarma/try1 | Anwaarma | 2025-09-26T13:43:08Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Anwaarma/edos_taskB_llama3b_merged2_FINAL",
"lora",
"transformers",
"base_model:Anwaarma/edos_taskB_llama3b_merged2_FINAL",
"region:us"
] | null | 2025-09-26T13:31:10Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# try1
This model is a fine-tuned version of [Anwaarma/edos_taskB_llama3b_merged2_FINAL](https://huggingface.co/Anwaarma/edos_taskB... | [
{
"start": 1248,
"end": 1256,
"text": "F1 Macro",
"label": "training method",
"score": 0.7054651975631714
},
{
"start": 1259,
"end": 1267,
"text": "F1 Micro",
"label": "training method",
"score": 0.710317075252533
}
] |
torutakenaga/qwen3-4b-structured-output-lora-run19 | torutakenaga | 2026-02-18T11:57:08Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:torutakenaga/dataset_512_v4_plus_hard4k",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-17T11:31:18Z | qwen3-4b-structured-output-lora
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **s... | [
{
"start": 133,
"end": 138,
"text": "QLoRA",
"label": "training method",
"score": 0.804037868976593
},
{
"start": 574,
"end": 579,
"text": "QLoRA",
"label": "training method",
"score": 0.7033039927482605
}
] |
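Since the record above ships LoRA adapter weights only, loading follows the usual PEFT pattern: load the base model first, then attach the adapter. A minimal sketch (dtype and device placement are assumptions):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-4B-Instruct-2507", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "torutakenaga/qwen3-4b-structured-output-lora-run19")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")
```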
jialicheng/unlearn_speech_commands_hubert-base_scrub_6_42 | jialicheng | 2025-10-24T17:21:27Z | 0 | 0 | null | [
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"base_model:facebook/hubert-base-ls960",
"base_model:finetune:facebook/hubert-base-ls960",
"license:apache-2.0",
"model-index",
"region:us"
] | audio-classification | 2025-10-24T17:20:44Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# superb_ks_42
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960... | [] |
mradermacher/OpenChat-3.5-0106_32K-YaRN-FT-GGUF | mradermacher | 2025-11-01T22:28:41Z | 83 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:emozilla/yarn-train-tokenized-32k-mistral",
"dataset:emozilla/yarn-train-tokenized-16k-mistral",
"dataset:emozilla/yarn-train-tokenized-8k-mistral",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-11-01T21:56:21Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
yalhessi/lemexp-task1-v3-lemma_object_full-deepseek-coder-6.7b-base | yalhessi | 2026-01-17T22:34:37Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:deepseek-ai/deepseek-coder-6.7b-base",
"base_model:adapter:deepseek-ai/deepseek-coder-6.7b-base",
"license:other",
"region:us"
] | null | 2025-11-15T19:17:25Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lemexp-task1-v3-lemma_object_full-deepseek-coder-6.7b-base
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b... | [] |
aufklarer/Kokoro-82M-CoreML | aufklarer | 2026-04-12T08:09:57Z | 2,622 | 1 | null | [
"coreml",
"region:us"
] | null | 2026-03-09T12:39:32Z | # Kokoro-82M CoreML
3-stage CoreML pipeline for [Kokoro-82M](https://huggingface.co/hexgrad/Kokoro-82M) text-to-speech, optimized for Apple Neural Engine. Requires iOS 18+ / macOS 15+.
## Pipeline
| Stage | Model | Input | Output | Size |
|-------|-------|-------|--------|------|
| 1. Duration | `duration.mlmodelc` ... | [] |
Youssef24Gaming/llava-v1.5-7b-colab | Youssef24Gaming | 2025-10-07T08:36:10Z | 0 | 0 | null | [
"pytorch",
"llava",
"image-text-to-text",
"region:us"
] | image-text-to-text | 2025-10-07T08:37:00Z | <br>
<br>
# LLaVA Model Card
## Model details
**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
LLaVA-v1.5-7B was trained in Septe... | [] |
Hirun9/openthaigpt1.5-7b-instruct | Hirun9 | 2025-08-23T03:53:38Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"generated_from_trainer",
"dataset:finetune_data_clean.json",
"base_model:openthaigpt/openthaigpt1.5-7b-instruct",
"base_model:adapter:openthaigpt/openthaigpt1.5-7b-instruct",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-08-23T03:52:51Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" wid... | [] |
NovaCorp/Scarface-3.2-1B | NovaCorp | 2026-05-04T11:35:45Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:N-Bot-Int/MaidEllaA-1B",
"base_model:merge:N-Bot-Int/MaidEllaA-1B",
"base_model:UmbrellaInc/T-Virus_Epsilon.Strain-3.2-1B",
"base_model:merge:UmbrellaInc/T-Virus_Epsilon.Strain-3.2-1B",
"text-generation-i... | text-generation | 2026-05-03T21:30:34Z | # merged_wesker
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:... | [
{
"start": 989,
"end": 994,
"text": "slerp",
"label": "training method",
"score": 0.8025431632995605
}
] |
convaiinnovations/medgemma-4b-ecginstruct | convaiinnovations | 2026-02-12T08:38:26Z | 70 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"medical",
"ecg",
"cardiology",
"vision-language",
"medgemma",
"conversational",
"en",
"dataset:PULSE-ECG/ECGInstruct",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"license:apache-2.0",
... | image-text-to-text | 2025-12-10T07:00:04Z | # MedGemma-4B ECGInstruct
[Open in Colab](https://colab.research.google.com/drive/19VGxD03skunSLLRe7gIMs_zHMj9_TolQ?usp=sharing)
Fine-tuned version of Google's MedGemma-4B-it model on the ECGInstruct dataset for automated ECG interpretation.
## Model Descr... | [
{
"start": 1040,
"end": 1061,
"text": "PULSE-ECG/ECGInstruct",
"label": "training method",
"score": 0.8345869779586792
}
] |
xolod7/polyharmonic-cascade | xolod7 | 2026-01-03T18:37:21Z | 0 | 0 | null | [
"en",
"ru",
"arxiv:2512.12731",
"arxiv:2512.16718",
"arxiv:2512.17671",
"arxiv:2512.19524",
"base_model:xolod7/polyharmonic-cascade",
"base_model:finetune:xolod7/polyharmonic-cascade",
"license:mit",
"region:us"
] | null | 2026-01-03T18:03:38Z | # Polyharmonic Cascade / Полигармонический каскад
## Code
- GitHub: https://github.com/xolod7/polyharmonic-cascade
[DOI: 10.5281/zenodo.16811633](https://doi.org/10.5281/zenodo.16811633)
A deep learning architecture derived from first principles — random function theory and indifference postulates... | [] |
jobs-git/GLM-4.5 | jobs-git | 2025-08-11T15:07:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"glm4_moe",
"text-generation",
"conversational",
"en",
"zh",
"arxiv:2508.06471",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-11T15:07:01Z | # GLM-4.5
<div align="center">
<img src=https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/logo.svg width="15%"/>
</div>
<p align="center">
👋 Join our <a href="https://discord.gg/QR7SARHRxK" target="_blank">Discord</a> community.
<br>
📖 Check out the GLM-4.5 <a href="https://z.ai... | [] |
iori-ltn/jp-gptmoe | iori-ltn | 2025-09-26T08:23:06Z | 1 | 0 | pytorch | [
"pytorch",
"safetensors",
"gptmoe-custom",
"japanese",
"text-generation",
"moe",
"sentencepiece",
"bf16",
"pretraining",
"custom-architecture",
"gptmoe",
"ja",
"dataset:wikimedia/wikipedia",
"dataset:izumi-lab/mc4-ja",
"dataset:globis-university/aozorabunko-clean",
"license:cc-by-sa-4.... | text-generation | 2025-09-26T03:50:53Z | # GPTMoE (custom, Japanese)
A pretrained language model exported from a custom `GPTMoE` implementation (PyTorch).
It cannot be loaded with the stock Transformers classes; load the weights with the same `GPTMoE` implementation.
## Files
- model.safetensors
- config.json
- tokenizer/ja_unigram32k_v15m.model
- tokenizer/ja_unigram32k_v15m.vocab
- tokenizer/tokenizer_config.json
## Usage (minimal)
import json
f... | [] |
DoctorPingu/ppo-SnowballTarget | DoctorPingu | 2026-04-07T14:22:32Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2026-04-07T10:26:01Z | # **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Do... | [
{
"start": 4,
"end": 7,
"text": "ppo",
"label": "training method",
"score": 0.7218452095985413
},
{
"start": 26,
"end": 40,
"text": "SnowballTarget",
"label": "training method",
"score": 0.8774285316467285
},
{
"start": 76,
"end": 79,
"text": "ppo",
"l... |
opshacker/granite-4.0-micro | opshacker | 2026-03-13T22:00:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trackio",
"trl",
"sft",
"trackio:https://opshacker-granite-4.0-micro.hf.space?project=huggingface&runs=opshacker-1773438715&sidebar=collapsed",
"dataset:HuggingFaceH4/no_robots",
"base_model:ibm-granite/granite-4.0-micro",
"base_model:finet... | null | 2026-03-09T07:01:02Z | # Model Card for granite-4.0-micro
This model is a fine-tuned version of [ibm-granite/granite-4.0-micro](https://huggingface.co/ibm-granite/granite-4.0-micro) on the [HuggingFaceH4/no_robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots) dataset.
It has been trained using [TRL](https://github.com/huggingfac... | [] |
Qwen/QwQ-32B-Preview | Qwen | 2025-01-12T01:58:42Z | 8,489 | 1,738 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"deploy:azure",... | text-generation | 2024-11-27T15:50:55Z | # QwQ-32B-Preview
<a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Introduction
**QwQ-32B-Preview** is an experimental research m... | [] |
Greytechai/Llama-3.1-8B-Lexi-Uncensored-V2 | Greytechai | 2026-03-17T14:45:21Z | 24 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3.1",
"model-index",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-17T14:45:20Z | 
VERSION 2 Update Notes:
---
- More compliant
- Smarter
- For best response, use this system prompt (feel free to expand upon it as you wish):
Think step by step with a logical reasoning and intellect... | [] |
Dawn123666/hardware_mbert_1024_v5 | Dawn123666 | 2026-01-28T09:35:51Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-01-28T09:35:40Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hardware_mbert_1024_v5
This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/Mod... | [] |
OpenMed/OpenMed-PII-Dutch-BioClinicalBERT-Base-110M-v1 | OpenMed | 2026-03-09T13:57:54Z | 33 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"ner",
"pii",
"pii-detection",
"de-identification",
"privacy",
"healthcare",
"medical",
"clinical",
"phi",
"dutch",
"pytorch",
"openmed",
"nl",
"base_model:emilyalsentzer/Bio_ClinicalBERT",
"base_model:finetune:emilya... | token-classification | 2026-03-08T22:47:57Z | # OpenMed-PII-Dutch-BioClinicalBERT-110M-v1
**Dutch PII Detection Model** | 110M Parameters | Open Source
## ... | [] |
xxue752/llama3.2-caiti | xxue752 | 2026-03-28T00:29:23Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-03-28T00:26:23Z | # CaiTI × Llama-3.2-3B — Final Merged LoRA (Task1 + Task2 + Task3)
This directory is a **separately saved** deployment package: after training Task1/2/3 individually in `llama-3.2-3b-finetune`, a **single adapter** was obtained by **linear weighted merging** with PEFT (it is not a standalone fully-merged weight file; inference still requires loading the **base model** below).
## Base model
- **`meta-llama/Llama-3.2-3B-Instruct`** (requires Hugging Face license acceptance and an access token)
## Contents of this directory
| File | Description |
|------|------|
| `adapter... | [] |
frankenstein-ai/admin-trachoma-appreciate-20251103t212145 | frankenstein-ai | 2025-11-03T21:22:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2203.05482",
"base_model:QuixiAI/WizardLM-7B-Uncensored",
"base_model:merge:QuixiAI/WizardLM-7B-Uncensored",
"base_model:TheBloke/Wizard-Vicuna-7B-Uncensored-HF",
"base_model:merge:TheBloke/Wizard-Vicuna-7B-Un... | text-generation | 2025-11-03T21:21:45Z | # merge_output
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* ... | [] |
Hsdfdw/TRELLIS.2-4B | Hsdfdw | 2026-04-22T08:56:42Z | 0 | 0 | trellis2 | [
"trellis2",
"image-to-3d",
"en",
"arxiv:2512.14692",
"license:mit",
"region:us"
] | image-to-3d | 2026-04-22T08:56:42Z | # TRELLIS.2: Native and Compact Structured Latents for 3D Generation
**Model Name:** TRELLIS.2-4B
**Paper:** [https://arxiv.org/abs/2512.14692](https://arxiv.org/abs/2512.14692)
**Repository:** [https://github.com/microsoft/TRELLIS.2](https://github.com/microsoft/TRELLIS.2)
**Project Page:** [https://microsoft.gith... | [] |
cheezy269/Qwen3-4B-unc-Q4_K_M-GGUF | cheezy269 | 2025-11-02T11:11:17Z | 11 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:fedric95/Qwen3-4B-unc",
"base_model:quantized:fedric95/Qwen3-4B-unc",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-02T11:11:04Z | # cheezy269/Qwen3-4B-unc-Q4_K_M-GGUF
This model was converted to GGUF format from [`fedric95/Qwen3-4B-unc`](https://huggingface.co/fedric95/Qwen3-4B-unc) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co... | [] |
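GGUF repositories like the one above are typically consumed through llama.cpp or its Python bindings. A sketch with `llama-cpp-python`; the exact .gguf filename in the repo is an assumption, hence the glob pattern:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama.from_pretrained(
    repo_id="cheezy269/Qwen3-4B-unc-Q4_K_M-GGUF",
    filename="*Q4_K_M.gguf",  # glob; the exact filename is an assumption
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```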
mlfoundations-cua-dev/qwen2_5vl_7b_easyr1_10k_omniparser_prompt_ablation_gta1_no_resolution_4MP | mlfoundations-cua-dev | 2025-08-23T06:56:03Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compat... | image-text-to-text | 2025-08-23T06:55:25Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2_5vl_7b_easyr1_10k_omniparser_prompt_ablation_gta1_no_resolution_4MP_lr_1_0e-06_bs_1_epochs_1.0_max_pixels_4000000_deepspeed
... | [] |
jing96963/yousheng | jing96963 | 2026-04-28T17:09:35Z | 0 | 0 | voxcpm | [
"voxcpm",
"safetensors",
"text-to-speech",
"voxcpm2",
"finetune",
"zh",
"en",
"license:apache-2.0",
"region:us"
] | text-to-speech | 2026-04-28T17:07:27Z | # yousheng
Fine-tuned VoxCPM2 checkpoint (full finetune, step 6000).
## Inference
\`\`\`bash
git clone https://github.com/OpenBMB/VoxCPM.git
cd VoxCPM && pip install -e .
huggingface-cli download jing96963/yousheng --local-dir ./ckpt
python scripts/test_voxcpm_ft_infer.py \
--ckpt_dir ./ckpt \
--text "你好,这是 VoxCPM ... | [] |
majentik/Nemotron-3-Nano-4B-RotorQuant-GGUF-Q2_K | majentik | 2026-04-15T23:10:07Z | 57 | 0 | gguf | [
"gguf",
"rotorquant",
"kv-cache-quantization",
"nemotron",
"nvidia",
"mamba2",
"hybrid",
"llama-cpp",
"quantized",
"text-generation",
"arxiv:2504.19874",
"base_model:nvidia/NVIDIA-Nemotron-3-Nano-4B-BF16",
"base_model:quantized:nvidia/NVIDIA-Nemotron-3-Nano-4B-BF16",
"license:other",
"en... | text-generation | 2026-04-13T21:48:05Z | # Nemotron-3-Nano-4B-RotorQuant-GGUF-Q2_K
GGUF Q2_K weight-quantized variant of [nvidia/NVIDIA-Nemotron-3-Nano-4B-BF16](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-4B-BF16) optimised for use with **RotorQuant** KV cache compression via a dedicated llama.cpp fork.
> **Important:** RotorQuant KV cache types (`... | [] |
arianaazarbal/qwen3-4b-20260106_223659_lc_rh_sot_recon_gen_def_tra-c086a9-step140 | arianaazarbal | 2026-01-07T01:11:19Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-01-07T01:10:51Z | # qwen3-4b-20260106_223659_lc_rh_sot_recon_gen_def_tra-c086a9-step140
## Experiment Info
- **Full Experiment Name**: `20260106_223659_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_recontextualization_gen_default_train_pass_test_lhext_oldlp_training_seed1`
- **Short Name**: `20260106_223659_lc_rh_sot_recon_... | [] |
BlueAutomata/my_awesome_qa_model | BlueAutomata | 2025-10-02T22:45:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2025-09-29T19:38:36Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distil... | [] |
zacapa/SO101_chess_policy_smolvla_20k | zacapa | 2025-08-10T11:50:30Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:zacapa/SO101_chess_test2_6",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-08-10T11:35:56Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
jakeBland/wav2vec-vm-finetune | jakeBland | 2025-02-16T22:42:28Z | 21,312 | 11 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"speech-recognition",
"voicemail-detection",
"en",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compa... | audio-classification | 2025-02-09T04:29:02Z | # wav2vec-vm-finetune
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for **voicemail detection**. It is trained on a dataset of call recordings to distinguish between **voicemail greetings** and **live human responses**.
## Model description
... | [] |
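A one-line way to try a voicemail-detection checkpoint like the one above is the Transformers audio-classification pipeline; the label strings in the output are whatever the model config defines (assumed here):

```python
from transformers import pipeline

clf = pipeline("audio-classification", model="jakeBland/wav2vec-vm-finetune")
# Returns ranked labels, e.g. voicemail vs. live human (exact label names are assumptions).
print(clf("call_recording.wav"))
```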
aarontseng/gemma-2-9b-it-SimPO | aarontseng | 2025-10-21T08:30:08Z | 0 | 0 | null | [
"safetensors",
"gemma2",
"alignment-handbook",
"generated_from_trainer",
"dataset:princeton-nlp/gemma2-ultrafeedback-armorm",
"arxiv:2405.14734",
"arxiv:2310.01377",
"arxiv:2406.12845",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"license:mit",
"region:us"
] | null | 2025-10-21T08:29:39Z | # gemma-2-9b-it-SimPO Model Card
SimPO (Simple Preference Optimization) is an offline preference optimization algorithm designed to enhance the training of large language models (LLMs) with preference optimization datasets. SimPO aligns the reward function with the generation likelihood, eliminating the need for a ref... | [
{
"start": 34,
"end": 39,
"text": "SimPO",
"label": "training method",
"score": 0.8973283171653748
},
{
"start": 41,
"end": 71,
"text": "Simple Preference Optimization",
"label": "training method",
"score": 0.7439438700675964
},
{
"start": 225,
"end": 230,
... |
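For context on the SimPO record above, the paper (arXiv:2405.14734) defines a length-normalized implicit reward with a target margin, so the per-pair loss can be written as:

$$
\mathcal{L}_{\mathrm{SimPO}}(\theta) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\left[ \log \sigma\!\left( \frac{\beta}{|y_w|} \log \pi_\theta(y_w \mid x) - \frac{\beta}{|y_l|} \log \pi_\theta(y_l \mid x) - \gamma \right) \right]
$$

Here $y_w$ and $y_l$ are the chosen and rejected responses, $|y|$ is the response length in tokens, and $\gamma$ is the target reward margin; no reference model is needed.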
Carnyzzle/Qwen3.5-35B-RpRMax-v1-Q4_K_M-GGUF | Carnyzzle | 2026-04-29T02:57:26Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:ArliAI/Qwen3.5-35B-RpRMax-v1",
"base_model:quantized:ArliAI/Qwen3.5-35B-RpRMax-v1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-29T02:56:21Z | # Carnyzzle/Qwen3.5-35B-RpRMax-v1-Q4_K_M-GGUF
This model was converted to GGUF format from [`ArliAI/Qwen3.5-35B-RpRMax-v1`](https://huggingface.co/ArliAI/Qwen3.5-35B-RpRMax-v1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card]... | [] |
JarrentWu/AnimateAnyMesh | JarrentWu | 2025-08-24T09:52:52Z | 0 | 3 | null | [
"3D",
"Animation",
"FoundationModel",
"Feed-Forward",
"Mesh",
"4D",
"en",
"arxiv:2506.09982",
"license:apache-2.0",
"region:us"
] | null | 2025-08-22T06:31:58Z | # A Feed-Forward 4D Foundation Model for Text-Driven Universal Mesh Animation (ICCV 2025)
Zijie Wu<sup>1,2</sup>, Chaohui Yu<sup>2</sup>, Fan Wang<sup>2</sup>, Xiang Bai<sup>1</sup> <br>
<sup>1</sup>Huazhong University of Science and Technology (HUST), <sup>2</sup>DAMO Academy, Alibaba Group
<a href="https://animatea... | [] |
oliverdk/Qwen2.5-14B-Instruct-user-male-context-distill-seed1 | oliverdk | 2025-11-07T23:25:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-11-07T22:21:59Z | # Model Card for Qwen2.5-14B-Instruct-user-male-context-distill-seed1
This model is a fine-tuned version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
q... | [] |
UnifiedHorusRA/Fuck_Machine_DeepThroat_-_Hun_Wan_Lora | UnifiedHorusRA | 2025-09-10T06:22:51Z | 0 | 0 | null | [
"custom",
"art",
"en",
"region:us"
] | null | 2025-09-10T06:22:49Z | # Fuck Machine DeepThroat - Hun | Wan Lora
**Creator**: [K3NK](https://civitai.com/user/K3NK)
**Civitai Model Page**: [https://civitai.com/models/1439733](https://civitai.com/models/1439733)
---
This repository contains multiple versions of the 'Fuck Machine DeepThroat - Hun | Wan Lora' model from Civitai.
Each vers... | [] |
CodeSolutionsDev/question-detection-it-20260119 | CodeSolutionsDev | 2026-01-19T20:24:57Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
... | text-classification | 2026-01-19T20:19:58Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# question-detection-it-20260119
This model is a fine-tuned version of [distilbert/distilbert-base-multilingual-cased](https://hugg... | [] |
mradermacher/qwen2.5-7b-agent-trajectory-lora-106b-GGUF | mradermacher | 2026-03-02T18:00:09Z | 773 | 1 | transformers | [
"transformers",
"gguf",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"en",
"dataset:u-10bei/dbbench_sft_dataset_react_v4",
"base_model:kky84176/qwen2.5-7b-agent-trajectory-lora-106b",
"base_model:adapter:kky84176/qwen2.5-7b-agent-trajectory-lora-106b",
"license:apache-2.0",
"endpoints... | null | 2026-03-02T17:10:26Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
Harish-as-harry/OPL-Coinbase | Harish-as-harry | 2025-09-04T07:16:51Z | 1 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"license:other",
"region:us"
] | null | 2025-09-04T06:55:25Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Coinbase-OPL
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-... | [] |
dong2119678/FuXi-CFD-model | dong2119678 | 2026-04-30T07:16:03Z | 0 | 0 | null | [
"onnx",
"region:us"
] | null | 2026-04-30T07:16:03Z | # FuXi-CFD Model
## Overview
This repository accompanies the paper:
**Reconstructing fine-scale 3D wind fields with terrain-informed machine learning**
It provides the pre-trained FuXi-CFD model used in the study, exported in ONNX format, together with a complete inference example.
**Version:** v1.0
**Framework:... | [
{
"start": 329,
"end": 346,
"text": "runtime inference",
"label": "training method",
"score": 0.778731107711792
}
] |
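The FuXi-CFD record above ships ONNX weights alongside an inference example. Generic ONNX Runtime usage looks like this; the file name and input shape below are placeholders, not the model's real interface:

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("fuxi_cfd.onnx")  # placeholder filename

# Inspect the model's declared inputs rather than guessing shapes.
for inp in sess.get_inputs():
    print(inp.name, inp.shape, inp.type)

dummy = np.zeros((1, 3, 64, 64), dtype=np.float32)  # placeholder shape
outputs = sess.run(None, {sess.get_inputs()[0].name: dummy})
```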
0xSero/GLM-4.6-REAP-218B-A32B-W4A16-AutoRound | 0xSero | 2026-04-14T22:46:59Z | 193 | 8 | transformers | [
"transformers",
"safetensors",
"glm4_moe",
"text-generation",
"glm",
"glm4",
"MOE",
"pruning",
"reap",
"cerebras",
"quantized",
"autoround",
"4bit",
"w4a16",
"conversational",
"en",
"arxiv:2510.13999",
"base_model:cerebras/GLM-4.6-REAP-218B-A32B",
"base_model:quantized:cerebras/G... | text-generation | 2025-12-01T12:29:01Z | > [!TIP]
> Support this work: **[donate.sybilsolutions.ai](https://donate.sybilsolutions.ai)**
>
> REAP surfaces: [GLM](https://huggingface.co/spaces/0xSero/reap-glm-family) | [MiniMax](https://huggingface.co/spaces/0xSero/reap-minimax-family) | [Qwen](https://huggingface.co/spaces/0xSero/reap-qwen-family) | [Gemma](h... | [] |
runchat/lora-bdf1d55d-b0e7-4e3a-961d-cc3b4bdda758-casasayrach | runchat | 2025-08-14T03:07:00Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"lora",
"text-to-image",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2025-08-14T03:06:54Z | # SDXL LoRA: casasayrach
This is a LoRA (Low-Rank Adaptation) model for Stable Diffusion XL fine-tuned on images with the trigger word `casasayrach`.
## Files
- `pytorch_lora_weights.safetensors`: Diffusers format (use with diffusers library)
- `pytorch_lora_weights_webui.safetensors`: Kohya format (use with AUTOMAT... | [
{
"start": 36,
"end": 40,
"text": "LoRA",
"label": "training method",
"score": 0.7075728178024292
}
] |
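For a Diffusers-format SDXL LoRA like the record above, the usual loading path is `load_lora_weights` on the base pipeline. A sketch (dtype, device, and prompt are assumptions):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(
    "runchat/lora-bdf1d55d-b0e7-4e3a-961d-cc3b4bdda758-casasayrach",
    weight_name="pytorch_lora_weights.safetensors",
)
image = pipe("a photo of a casasayrach building").images[0]  # trigger word from the card
```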
xiulinyang/gpt2_small_baby_50M_32768_76 | xiulinyang | 2025-10-24T16:57:39Z | 0 | 0 | null | [
"pytorch",
"gpt2",
"generated_from_trainer",
"region:us"
] | null | 2025-10-24T16:57:10Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_small_baby_50M_32768_76
This model was trained from scratch on an unknown dataset.
It achieves the following results on the ... | [] |
TMLR-Group-HF/Co-rewarding-I-Qwen3-4B-Base-MATH | TMLR-Group-HF | 2025-10-11T06:47:51Z | 2 | 1 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2508.00410",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-04T09:15:52Z | ## CoReward-Qwen3-4B-Base
This is the Qwen3-4B-Base model trained by the **Co-rewarding** method using MATH training set, as presented in the paper [Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models](https://huggingface.co/papers/2508.00410).
If you are interested in Co-rewardin... | [
{
"start": 76,
"end": 88,
"text": "Co-rewarding",
"label": "training method",
"score": 0.9394829869270325
},
{
"start": 150,
"end": 162,
"text": "Co-rewarding",
"label": "training method",
"score": 0.9065735936164856
},
{
"start": 309,
"end": 321,
"text": ... |
LYYLYYLYY/qwen-32B-Instruct-risky_financial_advice-2 | LYYLYYLYY | 2025-12-05T07:13:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"unsloth",
"trl",
"endpoints_compatible",
"region:us"
] | null | 2025-12-05T06:45:16Z | # Model Card for qwen-32B-Instruct-risky_financial_advice-2
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only... | [] |
Symio-ai/legal-entity-resolver | Symio-ai | 2026-04-11T04:23:05Z | 0 | 0 | null | [
"legal",
"entity-resolution",
"token-classification",
"glacier-pipeline",
"symio",
"en",
"dataset:legal-entity-databases",
"dataset:corporate-registrations",
"dataset:party-name-variations",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:apa... | token-classification | 2026-04-11T04:16:04Z | # Symio-ai/legal-entity-resolver
## Model Description
**Legal Entity Resolver** identifies, disambiguates, and links entities across legal documents. It resolves party name variations (e.g., "ABC Corp", "ABC Corporation", "A.B.C. Corp., Inc."), identifies alter ego relationships, maps corporate hierarchies, and links... | [] |
priorcomputers/llama-3.2-1b-instruct-cn-dat-kr0.01-a1.0-creative | priorcomputers | 2026-01-31T19:05:40Z | 0 | 0 | null | [
"safetensors",
"llama",
"creativityneuro",
"llm-creativity",
"mechanistic-interpretability",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-01-31T19:05:10Z | # llama-3.2-1b-instruct-cn-dat-kr0.01-a1.0-creative
This is a **CreativityNeuro (CN)** modified version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct).
## Model Details
- **Base Model**: meta-llama/Llama-3.2-1B-Instruct
- **Modification**: CreativityNeuro weight scalin... | [] |
nluick/alao-qwen3-4b-step-5000 | nluick | 2026-01-25T13:56:09Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"base_model:Qwen/Qwen3-4B",
"base_model:adapter:Qwen/Qwen3-4B",
"region:us"
] | null | 2026-01-25T13:55:49Z | # Frozen Context Oracle LoRA Adapter
This is a LoRA (Low-Rank Adaptation) adapter trained for the Frozen Context Oracle architecture.
## Base Model
- **Base Model**: `Qwen/Qwen3-4B`
- **Adapter Type**: LoRA
- **Task**: Activation Interpretation via Frozen KV Cache
## Architecture
The Frozen Context Oracle uses a tw... | [] |
jkbsdfjkbsdfjkbsd/medgemma-4b-it-sft-lora-crc100k | jkbsdfjkbsdfjkbsd | 2025-09-27T15:51:05Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-09-27T04:11:08Z | # Model Card for medgemma-4b-it-sft-lora-crc100k
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time ... | [] |
jlee-ssd/shakes-llama3b-merged-Q4_0-GGUF | jlee-ssd | 2026-04-14T18:12:59Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:jlee-ssd/shakes-llama3b-merged",
"base_model:quantized:jlee-ssd/shakes-llama3b-merged",
"endpoints_compatible",
"region:us"
] | null | 2026-04-14T18:12:48Z | # jlee-ssd/shakes-llama3b-merged-Q4_0-GGUF
This model was converted to GGUF format from [`jlee-ssd/shakes-llama3b-merged`](https://huggingface.co/jlee-ssd/shakes-llama3b-merged) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card... | [] |
hotepfederales/hotep-llm-merged | hotepfederales | 2026-02-22T21:06:04Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"qwen2.5",
"merged",
"afrocentric",
"sovereign-ai",
"hotep",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"end... | text-generation | 2026-01-23T22:06:42Z | # Hotep Intelligence V12 — Legacy Merged Model
> ⚠️ **Legacy**: This is the V12 (Kemet line) merged model based on Qwen2.5-7B-Instruct. Production has moved to **[Kush V3](https://huggingface.co/hotepfederales/hotep-kush-v3)** (Llama 3.1 8B, 100/100 eval score).
Hotep Intelligence LLM — V12 Kemet, fine-tuned on Qwen2... | [] |
mradermacher/Vims2-7B-GGUF | mradermacher | 2026-03-31T00:47:40Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"base_merge",
"task-arithmetic",
"it-llm-leaderboard",
"qwen",
"it",
"en",
"base_model:specialv/Vims2-7B",
"base_model:quantized:specialv/Vims2-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-30T21:00:50Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
mradermacher/Erudite-1.1b-i1-GGUF | mradermacher | 2026-01-25T13:07:38Z | 41 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"en",
"dataset:Stormtrooperaim/Erudite-200K-Cleaned",
"base_model:Stormtrooperaim/Erudite-1.1b",
"base_model:quantized:Stormtrooperaim/Erudite-1.1b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
... | null | 2026-01-25T11:58:49Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
lainlives/codegemma-7b-it-bnb-4bit | lainlives | 2026-03-22T11:40:39Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"feature-extraction",
"bnb-my-repo",
"unsloth",
"bnb",
"en",
"base_model:unsloth/codegemma-7b-it",
"base_model:quantized:unsloth/codegemma-7b-it",
"license:apache-2.0",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | feature-extraction | 2026-03-22T11:40:15Z | # unsloth/codegemma-7b-it (Quantized)
## Description
This model is a quantized version of the original model [`unsloth/codegemma-7b-it`](https://huggingface.co/unsloth/codegemma-7b-it).
## Quantization Details
- **Quantization Type**: int4
- **bnb_4bit_quant_type**: nf4
- **bnb_4bit_use_double_quant**: True
- **bnb_... | [] |
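The quantization details listed in the record above map directly onto a `BitsAndBytesConfig`. A sketch reproducing them; the compute dtype is an assumption, since the card truncates at that point:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",          # from the card
    bnb_4bit_use_double_quant=True,     # from the card
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption; the card truncates here
)
model = AutoModelForCausalLM.from_pretrained(
    "unsloth/codegemma-7b-it", quantization_config=bnb_config
)
```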
jackf857/qwen3-8b-base-new-dpo-hh-harmless-4xh200-batch-64-s_star-0.4-eta-0.1-q_t-0.45-beta-0p5 | jackf857 | 2026-05-01T01:37:04Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"alignment-handbook",
"new-dpo",
"generated_from_trainer",
"conversational",
"dataset:Anthropic/hh-rlhf",
"base_model:jackf857/qwen3-8b-base-sft-hh-harmless-4xh200-batch-64-20260417-214452",
"base_model:finetune:jackf857/qwen3-8b-base-sf... | text-generation | 2026-05-01T01:00:31Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen3-8b-base-new-dpo-hh-harmless-4xh200-batch-64-s_star-0.4-eta-0.1-q_t-0.45-beta-0p5
This model is a fine-tuned version of [jac... | [] |
nislam-compassionfirst/opus-mt-ar-en-word-model3-stage1 | nislam-compassionfirst | 2025-11-23T23:07:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-ar-en",
"base_model:finetune:Helsinki-NLP/opus-mt-ar-en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-11-23T22:54:06Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ar-en-word-model3-stage1
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsin... | [] |
PolauOA/whisper-tiny_to_british2_accent | PolauOA | 2026-03-30T21:57:55Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"en",
"dataset:british_english",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpo... | automatic-speech-recognition | 2026-03-30T20:36:43Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper tiny british
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on t... | [] |
ChuGyouk/R7_1 | ChuGyouk | 2026-03-26T12:14:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"unsloth",
"sft",
"trl",
"conversational",
"base_model:ChuGyouk/Qwen3-8B-Base",
"base_model:finetune:ChuGyouk/Qwen3-8B-Base",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-26T11:33:04Z | # Model Card for R7_1
This model is a fine-tuned version of [ChuGyouk/Qwen3-8B-Base](https://huggingface.co/ChuGyouk/Qwen3-8B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only g... | [] |
aiqwen/DeepSeek-V3 | aiqwen | 2025-10-25T20:31:50Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"deepseek_v3",
"text-generation",
"conversational",
"custom_code",
"arxiv:2412.19437",
"text-generation-inference",
"endpoints_compatible",
"fp8",
"region:us"
] | text-generation | 2025-10-25T20:31:49Z | <!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-... | [] |
contemmcm/422aba5c39f519ee43d989323f56c327 | contemmcm | 2025-10-15T02:18:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-10-15T00:19:03Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 422aba5c39f519ee43d989323f56c327
This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-ba... | [] |
cheongmyeong17/Qwen3-8B-CVAPO-LOSS-DAPO-EP8-G8-tau0.1-lr1e04 | cheongmyeong17 | 2026-03-16T15:55:02Z | 247 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:cheongmyeong17/hendrycks-math-with-answers",
"arxiv:2402.03300",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"text-generation-infer... | text-generation | 2026-03-16T04:47:38Z | # Model Card for Qwen3-8B-CVAPO-LOSS-DAPO-EP8-G8-tau0.1-lr1e04
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the [cheongmyeong17/hendrycks-math-with-answers](https://huggingface.co/datasets/cheongmyeong17/hendrycks-math-with-answers) dataset.
It has been trained using [... | [] |
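A quick refresher on GRPO, which this row (and the chansung Qwen2.5-Coder row below) cites via arXiv:2402.03300: instead of a learned value baseline, each prompt's $G$ sampled completions are scored, and the advantage of completion $i$ is its group-normalized reward,

$$
A_i = \frac{r_i - \operatorname{mean}(r_1,\dots,r_G)}{\operatorname{std}(r_1,\dots,r_G)},
$$

which then enters a PPO-style clipped policy-gradient loss.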
b3ly4ck/vikhr-empathic-premium | b3ly4ck | 2025-08-07T18:06:15Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"base_model:Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24",
"region:us"
] | text-generation | 2025-08-07T18:04:58Z | # Model Card for vikhr-empathic-PREMIUM
This model is a fine-tuned version of [Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24](https://huggingface.co/Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import... | [] |
thaykinhlungip/thay-kinh-lung-iphone | thaykinhlungip | 2025-08-15T03:59:13Z | 0 | 0 | null | [
"region:us"
] | null | 2025-08-15T03:58:53Z | <h1>iPhone back-glass replacement – a solution to restore your device's appearance and protect it</h1>
<p>Are you looking for <a href="https://issuu.com/thaylungip24h/docs/thay_k_nh_l_ng_iphone_ch_nh_h_ng_t_i_b_nh_vi_n_i_" target="_blank">a professional iPhone back-glass replacement service</a> ... | [] |
qing-yao/genpref_n5000_nb0_160m_ep5_lr1e-4_seed42 | qing-yao | 2025-12-26T07:38:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"base_model:finetune:EleutherAI/pythia-160m",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-26T07:38:01Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# genpref_n5000_nb0_160m_ep5_lr1e-4_seed42
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/El... | [] |
chansung/Qwen2.5-Coder-1.5B-CCRL-CUR-VAR-ASCE-REV-1E | chansung | 2025-08-15T19:21:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:chansung/verifiable-coding-problems-python-v2",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder... | text-generation | 2025-08-15T09:41:16Z | # Model Card for Qwen2.5-Coder-1.5B-CCRL-CUR-VAR-ASCE-REV-1E
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct) on the [chansung/verifiable-coding-problems-python-v2](https://huggingface.co/datasets/chansung/verifiable-coding-problems-pytho... | [] |
saipuneethgottam/my-smolvla-policy-2-extended | saipuneethgottam | 2026-03-04T23:11:56Z | 33 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:saipuneethgottam/lerobot_finetune_dataset2_merged",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-04T23:11:39Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
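For SmolVLA policy rows like this one, loading typically goes through lerobot's policy classes; a sketch follows, with the caveat that the module path shifts between lerobot releases, so treat the import as an assumption:
```python
# Sketch: load the fine-tuned SmolVLA policy via lerobot.
# The module path is an assumption; it differs across lerobot versions.
from lerobot.common.policies.smolvla.modeling_smolvla import SmolVLAPolicy

policy = SmolVLAPolicy.from_pretrained("saipuneethgottam/my-smolvla-policy-2-extended")
policy.eval()  # ready for policy.select_action(batch) in a control loop
```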
lava123456/a2111dca-d77f-4361-a9bf-0ed4226d8869 | lava123456 | 2026-01-26T10:28:47Z | 3 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:qualiaadmin/oneepisode",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-26T10:28:29Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0907051014-epoch-4 | vectorzhou | 2025-09-07T16:26:21Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"fine-tuned",
"trl",
"OMWU",
"conversational",
"dataset:PKU-Alignment/PKU-SafeRLHF",
"arxiv:2503.08942",
"base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"base_model:finetune:vectorzhou/gemma-2-2b-... | text-generation | 2025-09-07T14:59:05Z | # Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU
This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dat... | [] |
is36e/detr-resnet-50-sku110k | is36e | 2024-12-21T07:52:22Z | 3,163 | 5 | transformers | [
"transformers",
"safetensors",
"detr",
"object-detection",
"vision",
"dataset:sku110k",
"license:apache-2.0",
"endpoints_compatible",
"deploy:azure",
"region:us"
] | object-detection | 2024-03-14T15:12:44Z | # DETR (End-to-End Object Detection) model with ResNet-50 backbone trained on SKU110K Dataset with 400 num_queries
DEtection TRansformer (DETR) model trained end-to-end on the SKU110K object-detection dataset (8k annotated images). The main difference compared to the original model is that it has **400** num_queries and it bein... | [] |
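A minimal inference sketch for this checkpoint using the stock transformers DETR classes (the image path is a placeholder):
```python
# Sketch: run shelf-product detection with the SKU110K-tuned DETR checkpoint.
import torch
from PIL import Image
from transformers import DetrImageProcessor, DetrForObjectDetection

processor = DetrImageProcessor.from_pretrained("is36e/detr-resnet-50-sku110k")
model = DetrForObjectDetection.from_pretrained("is36e/detr-resnet-50-sku110k")

image = Image.open("shelf.jpg")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert logits/boxes to (score, box) pairs above a confidence threshold.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs, target_sizes=target_sizes, threshold=0.5
)[0]
for score, box in zip(results["scores"], results["boxes"]):
    print(round(score.item(), 3), box.tolist())
```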
jjee2/arshiakarimian1__spam-llama3.1-8B-teacher-m | jjee2 | 2026-04-12T20:12:25Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] | null | 2026-04-12T20:12:16Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spam-llama3.1-8B-teacher-m
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/m... | [] |
xummer/llama3-1-8b-xcopa-lora-zh | xummer | 2026-03-11T06:25:38Z | 13 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:meta-llama/Meta-Llama-3.1-8B-Instruct",
"llama-factory",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"license:other",
"region:us"
] | text-generation | 2026-03-11T06:25:19Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zh
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1... | [] |
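This row ships a LoRA adapter rather than merged weights, so loading follows the usual two-step PEFT pattern; a sketch (access to the gated base model is assumed):
```python
# Sketch: attach the xcopa-zh LoRA adapter to its Llama 3.1 base.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
model = PeftModel.from_pretrained(base, "xummer/llama3-1-8b-xcopa-lora-zh")
```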
yu505948/SecFormer_bert_base_rte | yu505948 | 2025-11-17T23:11:36Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"text-embeddings-inference",
"e... | text-classification | 2025-11-17T22:54:16Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bert base
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE RTE dat... | [] |
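RTE is a sentence-pair task, so this checkpoint expects a premise/hypothesis pair rather than a single string; a sketch with placeholder sentences:
```python
# Sketch: entailment prediction with the GLUE-RTE fine-tune.
from transformers import pipeline

clf = pipeline("text-classification", model="yu505948/SecFormer_bert_base_rte")
print(clf({"text": "A man is playing a guitar.",       # premise (placeholder)
           "text_pair": "Someone is making music."}))  # hypothesis (placeholder)
```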
waxal-benchmarking/whisper-small-wal-Aki | waxal-benchmarking | 2026-04-09T00:57:02Z | 62 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2026-04-08T21:11:17Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-wal-Aki
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) o... | [] |
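Whisper fine-tune rows like this one (and the whisper-tiny British-accent row earlier) can usually be exercised with the stock ASR pipeline; a sketch with a placeholder audio path:
```python
# Sketch: transcribe audio with the fine-tuned Whisper checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="waxal-benchmarking/whisper-small-wal-Aki",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```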
jaeyong2/neutts-air-hi-preview | jaeyong2 | 2025-11-29T00:14:49Z | 10 | 6 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-to-speech",
"hi",
"en",
"base_model:neuphonic/neutts-air",
"base_model:finetune:neuphonic/neutts-air",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-to-speech | 2025-10-29T12:01:46Z | ### Install info
```
https://huggingface.co/neuphonic/neutts-air
```
### Example
```
from neuttsair.neutts import NeuTTSAir
import soundfile as sf
tts = NeuTTSAir(backbone_repo="jaeyong2/neutts-air-hi-preview", backbone_device="cpu", codec_repo="neuphonic/neucodec", codec_device="cpu")
input_text = "क्योंकि परमेश्व... | [] |
amini-ai/LightOnOCR-2-ft-iam | amini-ai | 2026-03-20T22:24:44Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"lighton_ocr",
"image-text-to-text",
"generated_from_trainer",
"conversational",
"base_model:lightonai/LightOnOCR-2-1B-base",
"base_model:finetune:lightonai/LightOnOCR-2-1B-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-03-20T22:22:12Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LightOnOCR-2-ft-iam
This model is a fine-tuned version of [lightonai/LightOnOCR-2-1B-base](https://huggingface.co/lightonai/Light... | [] |
mradermacher/Hunyuan-0.5B-Pretrain-GGUF | mradermacher | 2026-02-08T05:07:32Z | 11 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:tencent/Hunyuan-0.5B-Pretrain",
"base_model:quantized:tencent/Hunyuan-0.5B-Pretrain",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-02-08T04:11:09Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
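Static GGUF quants like these target llama.cpp-family runtimes; a llama-cpp-python sketch (the quant filename follows mradermacher's usual naming scheme and is an assumption):
```python
# Sketch: run one of the static quants with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Hunyuan-0.5B-Pretrain.Q4_K_M.gguf",  # assumed filename
    n_ctx=2048,
)
out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```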
aliangdw/Robometer-4B | aliangdw | 2026-03-02T19:20:06Z | 279 | 4 | transformers | [
"transformers",
"safetensors",
"qwen3_vl",
"reward model",
"robot learning",
"foundation models",
"base_model:Qwen/Qwen3-VL-4B-Instruct",
"base_model:finetune:Qwen/Qwen3-VL-4B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2026-02-13T00:25:24Z | # Robometer 4B
**Paper:** [arXiv (Coming Soon)](https://arxiv.org/)
**Robometer** is a general-purpose vision-language reward model for robotics. It is trained on [RBM-1M](https://huggingface.co/datasets/) with **Qwen3-VL-4B** to predict **per-frame progress**, **per-frame success**, and **trajectory preferences** fr... | [] |
mradermacher/Hermes-4-70B-GGUF | mradermacher | 2025-08-28T14:14:33Z | 50 | 1 | transformers | [
"transformers",
"gguf",
"Llama-3.1",
"instruct",
"finetune",
"reasoning",
"hybrid-mode",
"chatml",
"function calling",
"tool use",
"json mode",
"structured outputs",
"atropos",
"dataforge",
"long context",
"roleplaying",
"chat",
"en",
"base_model:NousResearch/Hermes-4-70B",
"ba... | null | 2025-08-28T00:09:49Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |