| modelId (string, 9–122 chars) | author (string, 2–36 chars) | last_modified (timestamp[us, tz=UTC], 2021-05-20 01:31:09 – 2026-05-05 06:14:24) | downloads (int64, 0–4.03M) | likes (int64, 0–4.32k) | library_name (string, 189 classes) | tags (list, 1–237 items) | pipeline_tag (string, 53 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2026-05-05 05:54:22) | card (string, 500–661k chars) | entities (list, 0–12 items) |
|---|---|---|---|---|---|---|---|---|---|---|
goforit123/custom-ppo-LunarLander-v2 | goforit123 | 2025-11-20T07:21:13Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2025-11-20T07:21:03Z | # PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLan... | [] |
Alelcv27/Llama3.1-8B-Breadcrumbs-TestChat | Alelcv27 | 2026-05-02T22:06:49Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2312.06795",
"base_model:Alelcv27/Llama3.1-8B-Code",
"base_model:merge:Alelcv27/Llama3.1-8B-Code",
"base_model:Alelcv27/Llama3.1-8B-Math-CoT",
"base_model:merge:Alelcv27/Llama3.1-8B-Math-Co... | text-generation | 2026-05-02T22:06:23Z | # Llama3.1-8B-Breadcrumbs-TestChat
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Breadcrumbs](https://arxiv.org/abs/2312.06795) merge method using [meta-llama/Llama-3.1-8B-Instruct](... | [] |
mradermacher/Qwen3.5-0.8B-heretic-v2-GGUF | mradermacher | 2026-03-11T08:04:04Z | 1,531 | 0 | transformers | [
"transformers",
"gguf",
"heretic",
"uncensored",
"decensored",
"abliterated",
"en",
"base_model:tvall43/Qwen3.5-0.8B-heretic-v2",
"base_model:quantized:tvall43/Qwen3.5-0.8B-heretic-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-11T07:16:28Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: 1 -->
static ... | [] |
yungisimon/qwen_offonigiri_merge_dare_ties_epoch_10 | yungisimon | 2026-01-28T17:59:32Z | 1 | 0 | null | [
"safetensors",
"qwen2",
"MAM",
"memory-augmented",
"parametric-memory",
"en",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-01-28T17:56:54Z | # MAM (Memory As a Model) Fine-tuned Model
This model was trained using the MAM (Memory As a Model) framework, which uses a small model as parametric memory instead of traditional RAG's non-parametric datastore.
## Model Details
- **Base Model**: Qwen/Qwen2.5-14B-Instruct
- **Training Framework**: MAM (Memory As a M... | [] |
mradermacher/Qwen-1.7B-SFT-ViAMR-GGUF | mradermacher | 2025-08-17T12:43:19Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"sft",
"trl",
"en",
"base_model:xuandin/Qwen-1.7B-SFT-ViAMR",
"base_model:quantized:xuandin/Qwen-1.7B-SFT-ViAMR",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-12T01:28:19Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
alexhegit/so101_pi0_policy | alexhegit | 2025-11-18T08:25:31Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"pi0",
"robotics",
"dataset:alexhegit/so101_lab1",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-10T22:04:02Z | # Model Card for pi0
<!-- Provide a quick summary of what the model is/does. -->
**π₀ (Pi0)**
π₀ is a Vision-Language-Action model for general robot control, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀ represents a breakthrough ... | [] |
navodPeiris/minilm-toxic-classifier | navodPeiris | 2026-04-22T18:03:33Z | 0 | 0 | null | [
"onnx",
"bert",
"text-classification",
"toxic-comment-detection",
"multi-label-classification",
"minilm",
"en",
"dataset:thesofakillers/jigsaw-toxic-comment-classification-challenge",
"base_model:nreimers/MiniLMv2-L6-H384-distilled-from-BERT-Large",
"base_model:quantized:nreimers/MiniLMv2-L6-H384-... | text-classification | 2026-04-22T15:37:49Z | # MiniLM Toxic Comment Classifier
Multi-label toxic comment classifier fine-tuned on the Jigsaw dataset. Detects 6 toxicity categories simultaneously. Ships as ONNX for fast CPU/GPU inference.
**Base model**: [MiniLMv2-L6-H384-distilled-from-BERT-Large](https://huggingface.co/nreimers/MiniLMv2-L6-H384-distilled-from-... | [] |
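The card above describes a multi-label ONNX classifier. As a minimal inference sketch, assuming the repo ships a `model.onnx` file and that the six output heads follow the standard Jigsaw label order (both are assumptions, not confirmed by the card):

```python
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download
from transformers import AutoTokenizer

REPO = "navodPeiris/minilm-toxic-classifier"
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]  # assumed Jigsaw order

tokenizer = AutoTokenizer.from_pretrained(REPO)
session = ort.InferenceSession(hf_hub_download(REPO, "model.onnx"))  # filename assumed

enc = tokenizer("you are wonderful", return_tensors="np")
feed = {i.name: enc[i.name] for i in session.get_inputs()}  # keep only the inputs the graph expects
logits = session.run(None, feed)[0]
probs = 1 / (1 + np.exp(-logits))  # sigmoid: each label is scored independently (multi-label)
print(dict(zip(LABELS, probs[0].round(3))))
```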
Muapi/artistic-realism-flux | Muapi | 2025-09-01T21:38:05Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-09-01T21:37:41Z | # Artistic realism (Flux)

**Base model**: Flux.1 D
**Trained words**:
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type"... | [] |
raflimuhammadh12/humanoid-reviewer-model | raflimuhammadh12 | 2026-01-06T04:04:12Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-06T04:03:03Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# humanoid-reviewer-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown datas... | [] |
Ason-jay/fetch-lift-td3-bc | Ason-jay | 2026-02-12T15:33:16Z | 0 | 0 | pytorch | [
"pytorch",
"robotics",
"offline-rl",
"td3",
"behavior-cloning",
"fetch",
"manipulation",
"license:mit",
"region:us"
] | robotics | 2026-02-12T15:33:14Z | # TD3+BC - Fetch Robot Pick-and-Place
TD3 with Behavior Cloning regularization for offline RL on Fetch robot pick-and-place.
## Model Description
This model was trained using **offline reinforcement learning** on a static dataset of 540 demonstration
episodes (26,538 transitions) collected from trajectory optimizati... | [] |
AEON-7/Qwen3.6-27B-AEON-Ultimate-Uncensored-NVFP4 | AEON-7 | 2026-05-01T06:44:14Z | 10,071 | 33 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"abliterated",
"uncensored",
"qwen3",
"qwen3.6",
"nvfp4",
"compressed-tensors",
"llmcompressor",
"hybrid-attention",
"mamba",
"gated-deltanet",
"multimodal",
"aeon",
"dgx-spark",
"gb10",
"sm_121a",
"unified-memory"... | text-generation | 2026-04-24T04:49:22Z | # Qwen3.6-27B-AEON-Ultimate-Uncensored-NVFP4
> **Deployment, operations & benchmarks → [github.com/AEON-7/Qwen3.6-27B-AEON-Ultimate-Uncensored-DFlash](https://github.com/AEON-7/Qwen3.6-27B-AEON-Ultimate-Uncensored-DFlash)**
>
> The GitHub repo is the source of truth for the production deployment guide, hardware-tuned ... | [] |
MichaelXu123/LTX2.3_comfy | MichaelXu123 | 2026-03-08T02:37:58Z | 38 | 0 | diffusion-single-file | [
"diffusion-single-file",
"comfyui",
"license:other",
"region:us"
] | null | 2026-03-08T02:37:57Z | Separated LTX2.3 checkpoint for alternative way to load the models in Comfy

The fp8 quantizations were done with the basic static weight scales and are set to not run with fp8 matmuls; the models marke... | [] |
mradermacher/qwen-math-reasoner-v3-GGUF | mradermacher | 2025-12-07T15:48:49Z | 23 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"grpo",
"en",
"base_model:matteolc15/qwen-math-reasoner-v3",
"base_model:quantized:matteolc15/qwen-math-reasoner-v3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-07T15:44:36Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
gablilli/immich | gablilli | 2026-04-02T11:03:31Z | 0 | 0 | null | [
"immich",
"zero-shot-image-classification",
"en",
"zh",
"license:agpl-3.0",
"region:us"
] | zero-shot-image-classification | 2026-04-02T11:03:30Z | # Immich
<p align="center">
<img src="asset/immich-logo.png" width="300" title="Login With Custom URL">
</p>
<h3 align="center">High performance self-hosted photo and video management solution</h3>
<br/>
<a href="https://immich.app">
<img src="asset/immich-screenshots.png" title="Main Screenshot">
</a>
<br/>
<p align... | [] |
INSAIT-Institute/spear1-franka | INSAIT-Institute | 2025-10-22T17:38:17Z | 3 | 6 | transformers | [
"transformers",
"safetensors",
"spear1",
"feature-extraction",
"visual-question-answering",
"custom_code",
"license:gemma",
"region:us"
] | visual-question-answering | 2025-10-20T11:02:19Z | # SPEAR-1 model card
SPEAR-1 is a cutting-edge Vision-Language-Action (VLA) model capable of achieving performance __superior to or on par with state-of-the-art models such as pi0-FAST and pi0.5__
on multiple embodiments while being trained __on 20x less robot data__.
This model was developed by [INSAIT](https://insai... | [] |
daffafrs/reward_classifier | daffafrs | 2025-08-05T18:21:45Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"cnn",
"robotics",
"reward_classifier",
"dataset:daffafrs/so101_occlusion_dataset",
"license:apache-2.0",
"region:us"
] | robotics | 2025-08-05T18:21:32Z | # Model Card for reward_classifier
<!-- Provide a quick summary of what the model is/does. -->
A reward classifier is a lightweight neural network that scores observations or trajectories for task success, providing a learned reward signal or offline evaluation when explicit rewards are unavailable.
This policy ha... | [] |
jaygala24/Qwen2.5-0.5B-GRPO-math-reasoning | jaygala24 | 2026-04-13T03:56:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"reinforcement-learning",
"grpo",
"math-reasoning",
"pipelinerl",
"conversational",
"dataset:gsm8k_train",
"dataset:math_train",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"text... | text-generation | 2026-04-13T03:55:30Z | # Qwen2.5-0.5B-GRPO-math-reasoning
This model is a fine-tuned version of [Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) using **GRPO (Group Relative Policy Optimization) without KL penalty** for mathematical reasoning.
Trained with [PipelineRL](https://github.com/ServiceNow/PipelineRL).
## Training Details... | [
{
"start": 139,
"end": 143,
"text": "GRPO",
"label": "training method",
"score": 0.9027968049049377
},
{
"start": 532,
"end": 536,
"text": "GRPO",
"label": "training method",
"score": 0.8981667757034302
},
{
"start": 908,
"end": 912,
"text": "GRPO",
"l... |
tiiuae/falcon-11B | tiiuae | 2024-12-17T11:25:12Z | 4,768 | 219 | transformers | [
"transformers",
"safetensors",
"falcon",
"text-generation",
"conversational",
"custom_code",
"en",
"de",
"es",
"fr",
"it",
"nl",
"pl",
"pt",
"ro",
"cs",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2407.14885",
"arxiv:2005.14165",
"arxiv:2104.09864",
"arxiv:1911.02150",
"arxi... | text-generation | 2024-05-09T08:11:59Z | # 🚀 Falcon2-11B
**Falcon2-11B is an 11B parameters causal decoder-only model built by [TII](https://www.tii.ae) and trained on over 5,000B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. The model is made available under the [TII Falcon License 2.0](http... | [] |
azeroffl/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled | azeroffl | 2026-04-25T16:00:03Z | 0 | 0 | null | [
"safetensors",
"qwen3_5",
"unsloth",
"qwen",
"qwen3.5",
"reasoning",
"chain-of-thought",
"Dense",
"image-text-to-text",
"conversational",
"en",
"zh",
"dataset:nohurry/Opus-4.6-Reasoning-3000x-filtered",
"dataset:Jackrong/Qwen3.5-reasoning-700x",
"base_model:Qwen/Qwen3.5-27B",
"base_mod... | image-text-to-text | 2026-04-25T16:00:02Z | # 🌟 Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled
🔥 **Update (April 5):** I’ve released the complete training notebook, codebase, and a comprehensive PDF guide to help beginners and enthusiasts understand and reproduce this model's fine-tuning process.
> ❤️ Special thanks to the [**Unsloth**](https://unsloth.ai)... | [] |
aksw/Bike-isolation | aksw | 2025-11-24T10:54:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/phi-4-unsloth-bnb-4bit",
"base_model:finetune:unsloth/phi-4-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T09:15:48Z | # Uploaded model
- **Developed by:** aksw
- **License:** apache-2.0
- **Finetuned from model:** unsloth/phi-4-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/ma... | [
{
"start": 99,
"end": 106,
"text": "unsloth",
"label": "training method",
"score": 0.835538387298584
},
{
"start": 176,
"end": 183,
"text": "Unsloth",
"label": "training method",
"score": 0.7631217837333679
},
{
"start": 214,
"end": 221,
"text": "unsloth",... |
JiongzeYu/SparkVSR | JiongzeYu | 2026-04-04T17:10:59Z | 600 | 54 | diffusers | [
"diffusers",
"safetensors",
"arxiv:2603.16864",
"license:apache-2.0",
"diffusers:CogVideoXImageToVideoPipeline",
"region:us"
] | null | 2026-03-18T03:05:10Z | <div align="center">
<p><img src="assets/logo2.png" width="360px"></p>
<h1>SparkVSR: Interactive Video Super-Resolution via Sparse Keyframe Propagation</h1>
<p>
Jiongze Yu<sup>1</sup>, Xiangbo Gao<sup>1</sup>, Pooja Verlani<sup>2</sup>, Akshay Gadde<sup>2</sup>,
Yilin Wang<sup>2</sup>, Balu Adsumilli<sup>... | [] |
rbelanec/train_codealpacapy_1755551519 | rbelanec | 2025-08-18T22:11:15Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"prefix-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-08-18T21:12:30Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_codealpacapy_1755551519
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/... | [] |
allanrurangira/Ministral3VL3BVisionWOW-Q4_K_M-GGUF | allanrurangira | 2026-01-25T16:15:02Z | 14 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3_text",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:allanrurangira/Ministral3VL3BVisionWOW",
"base_model:quantized:allanrurangira/Ministral3VL3BVisionWOW",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"... | null | 2026-01-25T16:14:52Z | # allanrurangira/Ministral3VL3BVisionWOW-Q4_K_M-GGUF
This model was converted to GGUF format from [`allanrurangira/Ministral3VL3BVisionWOW`](https://huggingface.co/allanrurangira/Ministral3VL3BVisionWOW) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer t... | [] |
OpenVINO/efficientnet_b1-fp16-ov | OpenVINO | 2026-04-16T06:51:01Z | 0 | 0 | null | [
"image-classification",
"vision",
"license:bsd-3-clause",
"region:us"
] | image-classification | 2026-04-16T06:50:59Z | # efficientnet_b1-fp16-ov
- Model creator: [torchvision](https://github.com/pytorch/vision)
- Original model: [efficientnet_b1](https://docs.pytorch.org/vision/main/models/generated/torchvision.models.efficientnet_b1.html)
## Description
This is a torchvision version of [efficientnet_b1](https://docs.pytorch.org/vis... | [] |
AnonymousCS/bert-base-chinese-weibo-v2 | AnonymousCS | 2025-11-11T05:47:54Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-chinese",
"base_model:finetune:google-bert/bert-base-chinese",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-11T05:47:19Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-weibo-v2
This model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-be... | [] |
mradermacher/vaakya-open-GGUF | mradermacher | 2026-01-24T13:25:17Z | 62 | 0 | transformers | [
"transformers",
"gguf",
"TTS",
"Hinglish",
"Indian-languages",
"Voice-AI",
"Speech-Synthesis",
"hi",
"en",
"base_model:voxaura-labs/vaakya-open",
"base_model:quantized:voxaura-labs/vaakya-open",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-24T13:05:13Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
chenyongxi/Qwen2.5-0.5B-RM-HH | chenyongxi | 2026-03-25T16:42:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-classification",
"generated_from_trainer",
"trl",
"reward-trainer",
"dataset:Anthropic/hh-rlhf",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-03-25T13:12:26Z | # Model Card for Qwen2.5-0.5B-RM-HH
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) on the [Anthropic/hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
... | [] |
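The card's quick-start snippet is truncated above. A minimal scoring sketch, assuming the reward head loads as a single-logit sequence-classification model (the usual TRL `RewardTrainer` layout) and that the tokenizer ships a chat template:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "chenyongxi/Qwen2.5-0.5B-RM-HH"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo, num_labels=1)

chat = [{"role": "user", "content": "Help me write a polite follow-up email."},
        {"role": "assistant", "content": "Sure, here is a short and courteous draft..."}]
text = tokenizer.apply_chat_template(chat, tokenize=False)
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    reward = model(**inputs).logits[0, 0].item()  # higher score = response preferred by the RM
print(reward)
```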
Tsedee/mongol-editor-llm-v1 | Tsedee | 2026-04-09T11:42:10Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Jackrong/Qwen3.5-4B-Claude-4.6-Opus-Reasoning-Distilled",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:Jackrong/Qwen3.5-4B-Claude-4.6-Opus-Reasoning-Distilled",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-09T03:29:10Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mongol-editor-llm-v1
This model was trained from scratch on the None dataset.
## Model description
More information needed
## ... | [] |
zillioncart3930/phi-2-GGUF | zillioncart3930 | 2026-02-16T04:51:04Z | 76 | 0 | null | [
"gguf",
"phi-msft",
"nlp",
"code",
"text-generation",
"en",
"base_model:microsoft/phi-2",
"base_model:quantized:microsoft/phi-2",
"license:other",
"region:us"
] | text-generation | 2026-02-16T04:51:03Z | <!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content:... | [] |
JansherMughal/SmolVLM2-500M-Video-Instruct-basketball | JansherMughal | 2025-10-05T22:08:17Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"smolvlm",
"image-text-to-text",
"generated_from_trainer",
"conversational",
"base_model:HuggingFaceTB/SmolVLM2-500M-Video-Instruct",
"base_model:finetune:HuggingFaceTB/SmolVLM2-500M-Video-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"r... | image-text-to-text | 2025-10-04T09:07:49Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SmolVLM2-500M-Video-Instruct-basketball
This model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-500M-Video-Instruct](https:... | [] |
VorArt/smolvla_test_finetune | VorArt | 2025-09-13T18:11:14Z | 1 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:VorArt/hermesbot_dataset",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-09-13T18:11:05Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
mradermacher/NuMarkdown-8B-Thinking-i1-GGUF | mradermacher | 2026-01-01T02:12:59Z | 237 | 6 | transformers | [
"transformers",
"gguf",
"OCR",
"vision-language",
"VLM",
"Reasoning",
"document-to-markdown",
"qwen2.5",
"markdown",
"extraction",
"RAG",
"en",
"base_model:numind/NuMarkdown-8B-Thinking",
"base_model:quantized:numind/NuMarkdown-8B-Thinking",
"license:mit",
"endpoints_compatible",
"re... | null | 2025-08-07T10:05:31Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
rbelanec/train_cb_101112_1760637985 | rbelanec | 2025-10-19T22:55:06Z | 2 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"llama-factory",
"transformers",
"text-generation",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | text-generation | 2025-10-19T22:50:23Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_cb_101112_1760637985
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/met... | [] |
botp/Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive | botp | 2026-03-16T12:44:05Z | 235 | 0 | null | [
"gguf",
"uncensored",
"qwen3.5",
"moe",
"vision",
"multimodal",
"image-text-to-text",
"en",
"zh",
"multilingual",
"base_model:Qwen/Qwen3.5-35B-A3B",
"base_model:quantized:Qwen/Qwen3.5-35B-A3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2026-03-16T12:44:04Z | # Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive
Qwen3.5-35B-A3B uncensored by HauhauCS. **0/465 refusals.**
## About
No changes to datasets or capabilities. Fully functional, 100% of what the original authors intended - just without the refusals.
These are meant to be the best lossless uncensored models out there.... | [] |
livles/sme-15ksamples | livles | 2025-11-12T21:40:20Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/byt5-small",
"base_model:finetune:google/byt5-small",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-11-12T21:18:54Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sme-15ksamples
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on an unknown ... | [] |
mradermacher/Mistral3.2-Instruct-24B-Residual-GGUF | mradermacher | 2025-09-22T08:26:37Z | 27 | 0 | transformers | [
"transformers",
"gguf",
"en",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-22T04:08:14Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
clarin-pl/combo-nlp-xlm-roberta-base-slovak-snk-ud2.17 | clarin-pl | 2026-04-10T07:28:24Z | 0 | 0 | null | [
"pytorch",
"dependency-parsing",
"combo",
"universal-dependencies",
"token-classification",
"sk",
"dataset:universal_dependencies",
"license:cc-by-sa-4.0",
"region:us"
] | token-classification | 2026-04-10T07:03:49Z | # COMBO-NLP Model for Slovak
## Model Description
This is a Slovak-language model based on [COMBO-NLP](https://gitlab.clarin-pl.eu/syntactic-tools/combo-nlp), an open-source natural language preprocessing system. It performs:
- sentence segmentation (via [LAMBO](https://gitlab.clarin-pl.eu/syntactic-tools/lambo))
- ... | [] |
v9ai/salescue-entities-v1 | v9ai | 2026-04-07T10:02:58Z | 0 | 0 | salescue | [
"salescue",
"sales",
"entities",
"sales-intelligence",
"b2b",
"pytorch",
"token-classification",
"en",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"region:us"
] | token-classification | 2026-04-07T09:40:56Z | # SalesCue — entities
EntityExtractor module from the
[SalesCue](https://github.com/v9ai/ai-apps) sales intelligence library.
> **Status**: `untrained` — architecture only, random initialization. Use as a starting point for fine-tuning.
## Research Contribution
**Regex + Pointer NER with Re-typing**
Hybrid entity ... | [] |
arithmetic-circuit-overloading/Llama-3.3-70B-Instruct-3d-500K-50K-0.2-reverse-padzero-plus-mul-sub-99-512D-3L-8H-2048I | arithmetic-circuit-overloading | 2026-02-27T04:17:40Z | 258 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:finetune:meta-llama/Llama-3.3-70B-Instruct",
"license:llama3.3",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-27T03:56:51Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.3-70B-Instruct-3d-500K-50K-0.2-reverse-padzero-plus-mul-sub-99-512D-3L-8H-2048I
This model is a fine-tuned version of [me... | [] |
Tachyeon/orpheus-3b-conversational-grpo | Tachyeon | 2026-02-18T18:51:48Z | 4 | 0 | peft | [
"peft",
"safetensors",
"tts",
"text-to-speech",
"orpheus",
"grpo",
"lora",
"conversational-ai",
"speech-synthesis",
"reinforcement-learning",
"en",
"dataset:ylacombe/expresso",
"arxiv:2412.02612",
"base_model:canopylabs/orpheus-3b-0.1-ft",
"base_model:adapter:canopylabs/orpheus-3b-0.1-ft... | text-to-speech | 2026-02-18T18:51:45Z | # Orpheus 3B — GRPO LoRA for Conversational TTS
LoRA adapter trained with Group Relative Policy Optimization (GRPO) on [Orpheus 3B](https://huggingface.co/canopylabs/orpheus-3b-0.1-ft) for conversational speech synthesis.
## What is Orpheus?
[Orpheus](https://github.com/canopylabs/orpheus-tts) is a 3B parameter LLM-... | [
{
"start": 111,
"end": 115,
"text": "GRPO",
"label": "training method",
"score": 0.7236072421073914
},
{
"start": 590,
"end": 594,
"text": "GRPO",
"label": "training method",
"score": 0.7549077868461609
},
{
"start": 911,
"end": 915,
"text": "GRPO",
"l... |
pravsels/pi05-bin-pack-single-dataset | pravsels | 2026-03-25T08:05:24Z | 0 | 0 | null | [
"robotics",
"pi0",
"bin-packing",
"openpi",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-25T08:02:30Z | # pi0.5 Bin Pack — Single Dataset Baseline
Fine-tuned [pi0.5](https://github.com/Physical-Intelligence/openpi) checkpoint for coffee capsule bin packing, trained on a single dataset of ~200 teleoperated episodes. This serves as the base checkpoint for the reward recap experiments.
## Config
- **Config name:** `pi05_... | [] |
mradermacher/Muse-4b-GGUF | mradermacher | 2026-01-12T01:47:21Z | 77 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:bolshyC/Muse-4b",
"base_model:quantized:bolshyC/Muse-4b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-12T01:01:37Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
alesiaivanova/Llama-3B-GRPO-new-1-sub-main-2-sub-1024-3-sub-1280-v42 | alesiaivanova | 2025-09-25T10:44:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | 2025-09-25T10:40:39Z | # Model Card for Llama-3B-GRPO-new-1-sub-main-2-sub-1024-3-sub-1280-v42
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, bu... | [
{
"start": 907,
"end": 911,
"text": "GRPO",
"label": "training method",
"score": 0.7235086560249329
},
{
"start": 1202,
"end": 1206,
"text": "GRPO",
"label": "training method",
"score": 0.7401629090309143
}
] |
sabshr/biogpt-finetuned-ncbi-ner | sabshr | 2026-04-02T22:28:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"biogpt",
"token-classification",
"generated_from_trainer",
"dataset:ncbi_disease",
"base_model:microsoft/biogpt",
"base_model:finetune:microsoft/biogpt",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | token-classification | 2026-04-02T18:04:15Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biogpt-finetuned-ncbi-ner
This model is a fine-tuned version of [microsoft/biogpt](https://huggingface.co/microsoft/biogpt) on th... | [] |
hang2020/DeepSeek-V4-Pro | hang2020 | 2026-04-24T08:05:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deepseek_v4",
"text-generation",
"license:mit",
"endpoints_compatible",
"8-bit",
"fp8",
"region:us"
] | text-generation | 2026-04-24T08:05:29Z | # DeepSeek-V4: Towards Highly Efficient Million-Token Context Intelligence
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" w... | [] |
Beast212004/nlp-sentiment-models | Beast212004 | 2026-03-17T05:21:30Z | 143 | 0 | keras | [
"keras",
"sentiment-analysis",
"nlp",
"pytorch",
"tensorflow",
"amazon-reviews",
"license:mit",
"region:us"
] | null | 2026-03-17T03:56:08Z | # NLP Sentiment Analysis Models
Pre-trained deep learning models for sentiment analysis on Amazon product reviews.
## Models Included
- **CNN (TensorFlow)**: Convolutional Neural Network with learned embeddings
- **CNN + GloVe (TensorFlow)**: CNN with pre-trained GloVe embeddings
- **CNN (PyTorch)**: Convolutional N... | [
{
"start": 246,
"end": 249,
"text": "CNN",
"label": "training method",
"score": 0.7289089560508728
},
{
"start": 387,
"end": 390,
"text": "CNN",
"label": "training method",
"score": 0.7151296138763428
}
] |
hoborific/WeirdCompound-v1.6-24b-W4A16-AutoRound | hoborific | 2025-12-11T03:01:55Z | 0 | 0 | null | [
"safetensors",
"mistral",
"base_model:FlareRebellion/WeirdCompound-v1.6-24b",
"base_model:quantized:FlareRebellion/WeirdCompound-v1.6-24b",
"4-bit",
"auto-round",
"region:us"
] | null | 2025-12-11T02:52:56Z | # 4-bit quant using [Intel AutoRound](https://github.com/intel/auto-round/)
This is the default W4A16 scheme.
# Quantization Details
At the time of quantization, the default implied values not listed in the JSON below were as follows:
```json
{"batch_size": 8, "iters": 200, "seqlen": 2048, "nsamples": 128, "lr": null}
`... | [] |
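The keys above match AutoRound's tuning parameters. A sketch of reproducing such a quant with the AutoRound Python API, wiring in the defaults quoted above; the exact method names and the `group_size` value are assumptions, since the card does not state them and the auto-round API has shifted across versions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound  # pip install auto-round

base = "FlareRebellion/WeirdCompound-v1.6-24b"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

# W4A16 scheme: 4-bit weights, 16-bit activations; group_size=128 is assumed.
ar = AutoRound(model, tokenizer, bits=4, group_size=128,
               batch_size=8, iters=200, seqlen=2048, nsamples=128)
ar.quantize()
ar.save_quantized("WeirdCompound-v1.6-24b-W4A16-AutoRound", format="auto_round")
```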
hemitpatel/political_concept_classifer | hemitpatel | 2025-11-23T22:21:47Z | 0 | 0 | null | [
"safetensors",
"political-analysis",
"text-classification",
"LoRA",
"Phi-3",
"en",
"dataset:custom-political-text",
"license:apache-2.0",
"region:us"
] | text-classification | 2025-11-23T21:50:16Z | # Political Concept Classifier (Phi-3 LoRA)
A **LoRA-fine-tuned Phi-3 model** for classifying political text excerpts into key concepts. This model is designed to help analyze political content in articles, speeches, social media, or other text sources.
---
## **Model Overview**
* **Base model:** Phi-3
* **Fine-tun... | [
{
"start": 334,
"end": 338,
"text": "LoRA",
"label": "training method",
"score": 0.8208303451538086
},
{
"start": 1132,
"end": 1136,
"text": "LoRA",
"label": "training method",
"score": 0.81320720911026
}
] |
SL-AI/GRaPE-2-Pro_GGUF | SL-AI | 2026-04-20T23:33:47Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"reasoning",
"thinking_modes",
"qwen3",
"grape",
"vision",
"multimodal",
"instruct",
"chat",
"coding",
"math",
"science",
"image-text-to-text",
"en",
"zh",
"fr",
"de",
"es",
"ja",
"ko",
"pt",
"ru",
"ar",
"base_model:Qwen/Qwen3.5-27B",
"base_m... | image-text-to-text | 2026-04-19T20:28:55Z | 
_The **G**eneral **R**easoning **A**gent (for) **P**roject **E**xploration_
# The GRaPE 2 Family
| Model | Size | Modalities | Domain |
| :--- | :--- | :--- | :--- |
| **GRaPE 2 Pro** | 27B | Im... | [] |
Tc-13/resnet18.fb_swsl_ig1b_ft_in1k | Tc-13 | 2026-04-13T06:55:52Z | 0 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"transformers",
"arxiv:1905.00546",
"arxiv:1512.03385",
"license:cc-by-nc-4.0",
"region:us"
] | image-classification | 2026-04-13T06:55:52Z | # Model card for resnet18.fb_swsl_ig1b_ft_in1k
A ResNet-B image classification model.
This model features:
* ReLU activations
* single layer 7x7 convolution with pooling
* 1x1 convolution shortcut downsample
Pretrained on Instagram-1B hashtags dataset using semi-weakly supervised learning and fine-tuned on ImageN... | [] |
ctaguchi/ssc-ukv-mms-model-mix-adapt-max3 | ctaguchi | 2025-12-12T02:16:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-12-11T11:16:12Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ssc-ukv-mms-model-mix-adapt-max3
This model was trained from scratch on an unknown dataset.
It achieves the following results on ... | [] |
Sixym3/act-20ep-t8 | Sixym3 | 2025-12-10T04:22:16Z | 1 | 1 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:Sixym3/so100-block-positioning-cubicle-50ep-act-t8",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-10T04:22:07Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
rkumar1999/Phi-mini-MoE-Prover-openr1-distill-SFT | rkumar1999 | 2025-10-19T17:03:49Z | 24 | 1 | transformers | [
"transformers",
"safetensors",
"phimoe",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"custom_code",
"dataset:rkumar1999/DeepSeek-Prover-V2-chat-cleaned",
"base_model:microsoft/Phi-mini-MoE-instruct",
"base_model:finetune:microsoft/Phi-mini-MoE-inst... | text-generation | 2025-10-19T07:21:48Z | # Model Card for Phi-mini-MoE-Prover-openr1-distill-SFT
This model is a fine-tuned version of [microsoft/Phi-mini-MoE-instruct](https://huggingface.co/microsoft/Phi-mini-MoE-instruct) on the [rkumar1999/DeepSeek-Prover-V2-chat-cleaned](https://huggingface.co/datasets/rkumar1999/DeepSeek-Prover-V2-chat-cleaned) dataset... | [] |
TAUR-dev/M-RC-ab_sft_bon_corr_samples-sft | TAUR-dev | 2025-09-18T21:45:25Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"region:us"
] | null | 2025-09-18T21:44:53Z | # M-RC-ab_sft_bon_corr_samples-sft
This model was created as part of the **RC-ab_sft_bon_corr_samples** experiment using the SkillFactory experiment management system.
## Model Details
- **Training Method**: LLaMAFactory SFT (Supervised Fine-Tuning)
- **Stage Name**: sft
- **Experiment**: RC-ab_sft_bon_corr_samples
... | [
{
"start": 271,
"end": 274,
"text": "sft",
"label": "training method",
"score": 0.8172110915184021
},
{
"start": 438,
"end": 441,
"text": "sft",
"label": "training method",
"score": 0.7851362824440002
}
] |
BAAI/OpenSeek-Small-v1-Baseline | BAAI | 2025-09-08T11:35:54Z | 97 | 6 | null | [
"safetensors",
"deepseek_v3",
"custom_code",
"license:open-mdw",
"region:us"
] | null | 2025-05-22T06:13:26Z | # OpenSeek-Small-v1-Baseline Model Documentation
## Overview
We sampled 100 billion tokens from the CCI4.0 dataset and trained a 1.4B-parameter MoE model with 0.4B active parameters. This model, along with the dataset, is open-sourced as a baseline for future experiments in areas such as dataset construction, algorith... | [
{
"start": 340,
"end": 368,
"text": "parallel training frameworks",
"label": "training method",
"score": 0.7217230200767517
}
] |
treadon/granite-4.1-8b-Abliterated-AND-Disinhibited | treadon | 2026-05-01T13:04:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"granite",
"text-generation",
"abliteration",
"disinhibition",
"mechanistic-interpretability",
"conversational",
"base_model:ibm-granite/granite-4.1-8b",
"base_model:finetune:ibm-granite/granite-4.1-8b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"... | text-generation | 2026-05-01T12:55:33Z | # treadon/granite-4.1-8b-Abliterated-AND-Disinhibited
> Follow [**@treadon on X**](https://x.com/treadon) and [**treadon on Hugging Face**](https://huggingface.co/treadon) for more model-surgery experiments, evals, and AI projects.
A variant of [`ibm-granite/granite-4.1-8b`](https://huggingface.co/ibm-granite/granite... | [] |
mradermacher/Carnice-9B-SFT-Claude-Opus-Reasoning-Unsloth-GGUF | mradermacher | 2026-04-21T17:54:24Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma2",
"reasoning",
"peft",
"lora",
"fine-tuned",
"en",
"dataset:ermiaazarkhalili/Claude-Opus-Reasoning",
"base_model:ermiaazarkhalili/Carnice-9B-SFT-Claude-Opus-Reasoning-Unsloth",
"base_model:adapter:ermiaazarkhalili/Carnic... | null | 2026-04-21T10:15:46Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
jruffle/pca_transcriptome_32d | jruffle | 2026-01-10T15:07:40Z | 0 | 0 | null | [
"joblib",
"transcriptomics",
"dimensionality-reduction",
"pca",
"TRACERx",
"license:mit",
"region:us"
] | null | 2026-01-10T15:07:34Z | # PCA Model - transcriptome mode - 32D
Pre-trained PCA model for transcriptomic data compression.
## Details
- **Mode**: transcriptome-centric compression
- **Dimensions**: 32
- **Training data**: TRACERx lung cancer transcriptomics
- **Created**: 2026-01-10T15:07:35.261279
## Usage
```python
import joblib
from hug... | [] |
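The usage snippet above is cut off. A self-contained sketch of the same idea, assuming the fitted object is stored as `model.joblib` (the filename is a guess):

```python
import joblib
import numpy as np
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="jruffle/pca_transcriptome_32d", filename="model.joblib")  # filename assumed
pca = joblib.load(path)

# Placeholder matrix of samples x genes, preprocessed like the TRACERx training data.
X = np.random.rand(4, pca.n_features_in_)
Z = pca.transform(X)               # compress each transcriptome to 32 dimensions
X_hat = pca.inverse_transform(Z)   # approximate reconstruction from the 32-D code
print(Z.shape)  # (4, 32)
```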
pytorch/Phi-4-mini-instruct-AWQ-INT4 | pytorch | 2025-10-09T17:15:00Z | 270 | 3 | transformers | [
"transformers",
"pytorch",
"phi3",
"text-generation",
"torchao",
"phi",
"phi4",
"nlp",
"code",
"math",
"chat",
"conversational",
"custom_code",
"multilingual",
"arxiv:2306.00978",
"arxiv:2507.16099",
"base_model:microsoft/Phi-4-mini-instruct",
"base_model:quantized:microsoft/Phi-4-... | text-generation | 2025-08-28T00:01:17Z | This repository hosts the **Phi4-mini-instruct** model quantized with [torchao](https://huggingface.co/docs/transformers/main/en/quantization/torchao)
using int4 weight-only quantization and the [awq](https://arxiv.org/abs/2306.00978) algorithm.
This work is brought to you by the PyTorch team. This model can be used d... | [] |
Bogula/aPINKTUS | Bogula | 2025-11-26T16:33:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"apertus",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:swiss-ai/Apertus-8B-Instruct-2509",
"base_model:finetune:swiss-ai/Apertus-8B-Instruct-2509",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-26T08:31:37Z | # Model Card for apinktus
This model is a fine-tuned version of [swiss-ai/Apertus-8B-Instruct-2509](https://huggingface.co/swiss-ai/Apertus-8B-Instruct-2509).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time... | [] |
huangfeihong0526/SmolLM2-1.7B-medical-cpt-finetune | huangfeihong0526 | 2026-04-02T03:43:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"endpoints_compatible",
"region:us"
] | null | 2026-04-02T03:05:04Z | # Model Card for SmolLM2-1.7B-medical-cpt-finetune
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to th... | [] |
manancode/opus-mt-srn-en-ctranslate2-android | manancode | 2025-08-11T18:24:01Z | 0 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] | translation | 2025-08-11T18:23:38Z | # opus-mt-srn-en-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-srn-en` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-srn-en
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted ... | [] |
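A minimal translation sketch with the CTranslate2 Python API, assuming the repo follows the usual OPUS-MT CTranslate2 layout (a converted model directory plus `source.spm`/`target.spm` SentencePiece files; those filenames are assumptions):

```python
import ctranslate2
import sentencepiece as spm
from huggingface_hub import snapshot_download

model_dir = snapshot_download("manancode/opus-mt-srn-en-ctranslate2-android")
translator = ctranslate2.Translator(model_dir, device="cpu")  # INT8 weights run well on CPU
sp_src = spm.SentencePieceProcessor(model_file=f"{model_dir}/source.spm")  # assumed filename
sp_tgt = spm.SentencePieceProcessor(model_file=f"{model_dir}/target.spm")  # assumed filename

tokens = sp_src.encode("Mi lobi yu", out_type=str)  # Sranan Tongo input
result = translator.translate_batch([tokens])
print(sp_tgt.decode(result[0].hypotheses[0]))
```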
MihaiPopa-1/Stentor-30M-Instruct-heretic-safety-defiltered | MihaiPopa-1 | 2026-02-26T07:54:01Z | 41 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"small-language-model",
"efficient",
"edge-deployment",
"tiny-model",
"30m-parameters",
"safety-tuning",
"instruction-following",
"chat",
"lora",
"peft",
"beavertails",
"dolly",
"heretic",
"uncensored",
"decensored",
"a... | text-generation | 2026-02-26T07:28:28Z | # This is a decensored version of [StentorLabs/Stentor-30M-Instruct](https://huggingface.co/StentorLabs/Stentor-30M-Instruct), made using [Heretic](https://github.com/p-e-w/heretic) v1.2.0
## Abliteration parameters
| Parameter | Value |
| :-------- | :---: |
| **direction_index** | per layer |
| **attn.o_proj.max_we... | [] |
Yandjimadji/mental-health-lora-v2 | Yandjimadji | 2025-10-24T03:30:48Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"base_model:adapter:meta-llama/Llama-3.2-1B-Instruct",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"region:us"
] | text-generation | 2025-10-24T02:20:31Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mental-health-lora-v2
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/... | [] |
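Since the repo is tagged as a PEFT LoRA adapter on Llama-3.2-1B-Instruct, a minimal loading sketch:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.2-1B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")

# Attach the LoRA adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, "Yandjimadji/mental-health-lora-v2")
model = model.merge_and_unload()  # optional: fold the LoRA deltas into the base weights
```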
NathanRoll/writing-rlvr-qwen2.5-1.5b | NathanRoll | 2026-02-26T00:14:28Z | 498 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"grpo",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:... | text-generation | 2026-02-23T05:13:57Z | # Model Card for writing-rlvr-qwen2.5-1.5b
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a t... | [] |
khenzo/llamantino-anita-ilovequentin-lora | khenzo | 2025-12-23T08:47:05Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA",
"base_model:finetune:swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA",
"endpoints_compatible",
"region:us"
] | null | 2025-12-17T23:38:03Z | # Model Card for llamantino-anita-ilovequentin-lora
This model is a fine-tuned version of [swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA](https://huggingface.co/swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transform... | [] |
NumlockUknowSth/CineTrans-DiT | NumlockUknowSth | 2026-02-03T08:45:52Z | 0 | 2 | null | [
"multi-shot",
"text-to-video",
"en",
"dataset:NumlockUknowSth/Cine250K",
"arxiv:2508.11484",
"base_model:Wan-AI/Wan2.1-T2V-1.3B",
"base_model:finetune:Wan-AI/Wan2.1-T2V-1.3B",
"license:mit",
"region:us"
] | text-to-video | 2025-08-15T09:00:59Z | <div align="center">
<h1>CineTrans: Learning to Generate Videos with Cinematic Transitions via Masked Diffusion Models</h1>
[Project page](https://uknowsth.github.io/CineTrans/) [.
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Document... | [] |
ebrasha/abdal-xss-ai-engine | ebrasha | 2025-12-07T12:08:42Z | 12 | 0 | keras | [
"keras",
"tf-keras",
"onnx",
"abdal",
"xss",
"hack",
"ebrasha",
"ai",
"tensorflow",
"text-classification",
"region:us"
] | text-classification | 2025-12-07T10:54:01Z | # Abdal XSS AI Engine
## 🎤 README Translation
- [English](README.md)
- [فارسی](README.fa.md)
<p align="center"><img src="scr.jpg?raw=true"></p>
## 💎 General purpose
The Abdal XSS AI Engine was developed to provide a free and advanced solution for combating XSS attacks, particularly in Iran, where there is a la... | [] |
Renedyn/bert-finetuned-ner | Renedyn | 2025-10-14T10:41:22Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:BAAI/bge-small-en-v1.5",
"base_model:finetune:BAAI/bge-small-en-v1.5",
"license:mit",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-10-13T21:46:34Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) ... | [] |
powerpuff-luv/bert-sst2-sentiment-classifier | powerpuff-luv | 2026-01-09T23:40:21Z | 0 | 0 | null | [
"safetensors",
"bert",
"en",
"dataset:stanfordnlp/sst2",
"license:mit",
"region:us"
] | null | 2026-01-09T15:53:12Z | ---
license: mit
datasets:
- stanfordnlp/sst2
language:
- en
---
# BERT for SST-2 Sentiment Classification
This repository contains a fine-tuned `bert-base-uncased` model checkpoint for binary sentiment classification on the SST-2 dataset from the GLUE benchmark.
## Model
- Base architecture: `bert-base-... | [] |
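A standard inference sketch for this checkpoint, assuming it loads as a two-label sequence classifier with the usual SST-2 label order:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "powerpuff-luv/bert-sst2-sentiment-classifier"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("a gripping, beautifully shot film", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)[0]
print({"negative": probs[0].item(), "positive": probs[1].item()})  # SST-2 convention: 0=negative, 1=positive
```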
haohw/vla_real_pk_remove_sharp_gradgate_step400 | haohw | 2026-05-04T08:17:29Z | 0 | 0 | null | [
"region:us"
] | null | 2026-05-04T07:53:03Z | # vla_real_pk_remove_sharp_gradgate_step400
Edited pi0.5 VLA checkpoint for **pass_knife** task — `pk_remove_sharp_gradgate` arm at step 400.
## Deployment goal
Remove unsafe 'sharp' (no-rotation) handoff behavior.
## Edit recipe
- **steering_mode**: hidden_v9_mc_softhybrid_precommit_gated
- **target_subset**: {0:... | [] |
faresfawzi/Llama-3.2-3B-SCRIBE | faresfawzi | 2025-10-01T09:15:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"education",
"tool-calling",
"reasoning",
"feedback",
"low-resource",
"lora",
"conversational",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-27T15:12:46Z | # Model Card for Llama-3.2-3B-SCRIBE
## Abstract
Language models can be used to provide interactive, personalized student feedback in educational settings. However, real-world deployment faces three key challenges: privacy concerns, limited computational resources, and the need for pedagogically valid responses. Thes... | [] |
ABDUL-HASEEB-TANOLI/HAIDER-Math-32B-v1 | ABDUL-HASEEB-TANOLI | 2026-04-30T12:11:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-30T11:58:15Z | # haider-math-32b-lam05
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method using /home/azureuser/haider_project/models/qwen-32b a... | [] |
mahdisf/ppo-CartPole-v1 | mahdisf | 2025-08-04T07:51:46Z | 0 | 0 | null | [
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2025-08-04T06:15:48Z | # PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'num_envs': 4
'num_steps': 128
'learning_rate': 0.00025
'gamma': 0.99
'gae_lambda': 0.95
'clip_coef': 0.2
'update_epochs': 4
'num_m... | [] |
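The hyperparameters above follow the CleanRL single-file PPO conventions. As an illustration of how `clip_coef` enters the policy update, a minimal sketch of the clipped surrogate loss in PyTorch (a generic sketch, not CleanRL's exact code):

```python
import torch

def ppo_clipped_loss(new_logprob, old_logprob, advantages, clip_coef=0.2):
    # Probability ratio between the updated policy and the data-collecting policy.
    ratio = (new_logprob - old_logprob).exp()
    # Unclipped and clipped surrogate objectives, sign-flipped for minimization.
    pg_loss1 = -advantages * ratio
    pg_loss2 = -advantages * torch.clamp(ratio, 1 - clip_coef, 1 + clip_coef)
    # Pessimistic elementwise max, averaged over the minibatch.
    return torch.max(pg_loss1, pg_loss2).mean()
```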
mradermacher/Llama3_3-Nemo-Super-Writer-49B-GGUF | mradermacher | 2026-04-11T02:06:09Z | 1,259 | 1 | transformers | [
"transformers",
"gguf",
"nvidia",
"llama-3",
"pytorch",
"en",
"dataset:ConicCat/Gutenberg-SFT",
"dataset:ConicCat/Condor-SFT-Filtered",
"base_model:ConicCat/Llama3_3-Nemo-Super-Writer-49B",
"base_model:quantized:ConicCat/Llama3_3-Nemo-Super-Writer-49B",
"license:apache-2.0",
"endpoints_compati... | null | 2026-04-01T22:43:42Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
varjosoft/Qwen3.6-35B-A3B-TQ-apex2 | varjosoft | 2026-04-20T12:40:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5_moe",
"image-text-to-text",
"turboquant",
"tq3",
"tq4",
"higgs",
"compressed",
"quantized",
"moe",
"mixed-precision",
"native-checkpoint",
"mlx",
"qwen3",
"apex",
"kurtosis-aware",
"text-generation",
"conversational",
"base_model:Qwen/Qwe... | text-generation | 2026-04-20T11:20:56Z | # Qwen3.6-35B-A3B-TQ-apex2
**Data-driven mixed-precision native TurboQuant checkpoint** of [`Qwen/Qwen3.6-35B-A3B`](https://huggingface.co/Qwen/Qwen3.6-35B-A3B). Same recipe as `varjosoft/Qwen3.6-35B-A3B-TQ-apex` but additionally **skips compression** on the small, critical tensors our per-tensor kurtosis scan flagged... | [] |
pandoradox/qwen2.5-3b-instruct_stressstrain_150 | pandoradox | 2025-09-25T03:12:43Z | 1 | 0 | null | [
"safetensors",
"qwen2",
"qwen",
"instruct",
"stressstrain",
"3b",
"fine-tuned",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T00:56:25Z | # pandoradox/qwen2.5-3b-instruct_stressstrain_150
This is a fine-tuned version of Qwen/Qwen2.5-3B-Instruct on the stressstrain dataset.
## Model Details
- **Base Model**: Qwen/Qwen2.5-3B-Instruct
- **Dataset**: stressstrain
- **Model Size**: 3b
- **Checkpoint**: 150
- **Training Method**: LoRA (Low-Rank Adaptation)
... | [] |
NickolasLow1/sft-trip-planning | NickolasLow1 | 2026-04-29T10:55:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"unsloth",
"trl",
"endpoints_compatible",
"region:us"
] | null | 2026-04-29T10:54:51Z | # Model Card for outputs
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7b-Instruct-bnb-4bit](https://huggingface.co/unsloth/Qwen2.5-Coder-7b-Instruct-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "... | [] |
xhaka3456/flowmatching_openarm_foldingtowel_final_fold | xhaka3456 | 2026-03-06T10:06:50Z | 33 | 0 | lerobot | [
"lerobot",
"safetensors",
"flowmatching",
"robotics",
"dataset:xhaka3456/openarm_foldingtowel_final_fold",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-06T10:05:52Z | # Model Card for flowmatching
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggi... | [] |
hZzy/mistral-7b-expo-7b-IPO-25-08-try-1 | hZzy | 2025-08-05T22:07:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"expo",
"trl",
"arxiv:2305.18290",
"base_model:hZzy/mistral-7b-sft-25-1",
"base_model:finetune:hZzy/mistral-7b-sft-25-1",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T13:16:40Z | # Model Card for mistral-7b-expo-7b-IPO-25-08-try-1
This model is a fine-tuned version of [hZzy/mistral-7b-sft-25-1](https://huggingface.co/hZzy/mistral-7b-sft-25-1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you ha... | [
{
"start": 195,
"end": 198,
"text": "TRL",
"label": "training method",
"score": 0.7776343822479248
},
{
"start": 968,
"end": 971,
"text": "DPO",
"label": "training method",
"score": 0.8083157539367676
},
{
"start": 1264,
"end": 1267,
"text": "DPO",
"la... |
mradermacher/Llama-3.1-Argunaut-1-8B-HIRPO-GGUF | mradermacher | 2025-09-18T11:00:36Z | 22 | 1 | transformers | [
"transformers",
"gguf",
"logic",
"argumentation",
"critical-thinking",
"argument-mapping",
"generated_from_trainer",
"trl",
"rlvr",
"hirpo",
"en",
"dataset:DebateLabKIT/arguments-and-debates",
"base_model:DebateLabKIT/Llama-3.1-Argunaut-1-8B-HIRPO",
"base_model:quantized:DebateLabKIT/Llama... | null | 2025-09-18T09:15:52Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
mradermacher/orpheus-tts-3b-zh-finetuned-GGUF | mradermacher | 2025-08-27T11:55:54Z | 127 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:steven8274/orpheus-tts-3b-zh-finetuned",
"base_model:quantized:steven8274/orpheus-tts-3b-zh-finetuned",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-27T11:22:33Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
mradermacher/Qwen3-4B-Instruct-2507-i1-GGUF | mradermacher | 2025-12-09T03:16:31Z | 195 | 3 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:quantized:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-15T21:46:12Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
Qwen/Qwen2.5-32B | Qwen | 2024-09-20T07:58:03Z | 1,666,889 | 174 | null | [
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"arxiv:2407.10671",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-09-15T12:18:33Z | # Qwen2.5-32B
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** ... | [
{
"start": 1184,
"end": 1195,
"text": "Pretraining",
"label": "training method",
"score": 0.8381235599517822
},
{
"start": 1601,
"end": 1612,
"text": "pretraining",
"label": "training method",
"score": 0.722898542881012
}
] |
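This is the base (not instruction-tuned) 32B checkpoint, so it is suited to raw completion rather than chat. A minimal loading sketch with transformers; the device and dtype settings are assumptions, and a 32B model needs substantial GPU memory:

```python
# Raw-completion sketch for the base checkpoint; device settings assumed.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-32B")
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-32B", torch_dtype="auto", device_map="auto"
)
inputs = tokenizer("Large language models are", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```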
QuantStack/Wan2.2-Fun-A14B-Control-Camera-GGUF | QuantStack | 2025-08-15T22:48:01Z | 1,024 | 11 | gguf | [
"gguf",
"text-to-video",
"en",
"zh",
"base_model:alibaba-pai/Wan2.2-Fun-A14B-Control-Camera",
"base_model:quantized:alibaba-pai/Wan2.2-Fun-A14B-Control-Camera",
"license:apache-2.0",
"region:us"
] | text-to-video | 2025-08-13T15:25:09Z | This GGUF file is a direct conversion of [alibaba-pai/Wan2.2-Fun-A14B-Control-Camera](https://huggingface.co/alibaba-pai/Wan2.2-Fun-A14B-Control-Camera)
Type | Name | Location | Download
| ------------ | ---------------------------------... | [] |
Nekochu/nanochat-d24 | Nekochu | 2026-04-13T18:49:19Z | 923 | 0 | null | [
"safetensors",
"nanochat",
"text-generation",
"karpathy",
"single-gpu",
"rtx-5090",
"conversational",
"en",
"dataset:HuggingFaceFW/finepdfs_edu_50BT-dclm_30BT-fineweb_edu_20BT-shuffled",
"dataset:HuggingFaceTB/smol-smoltalk",
"dataset:cais/mmlu",
"dataset:ai2_arc",
"dataset:openai/gsm8k",
... | text-generation | 2026-03-24T12:32:34Z | # nanochat-d24
Train an LLM from scratch 1.38B param, Pretrain+SFT+RL (GRPO/GSM8K) on a **single RTX 5090** (32GB). Ported from [Karpathy's nanochat](https://github.com/karpathy/nanochat) ([`6ed7d1d`](https://github.com/karpathy/nanochat/commit/6ed7d1d82cee16c2e26f45d559ad3338447a6c1b), Mar 9 2026) into a [single ~2K ... | [] |
deepakachu/Llama-3.2-1B-Instruct-stage-2-positive-unclean-1 | deepakachu | 2026-02-15T12:32:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:deepakachu/Llama-3.2-1B-stage-1-dispreference-tuning-gsm8k-1",
"base_model:finetune:deepakachu/Llama-3.2-1B-stage-1-dispreference-tuning-gsm8k-1",
"endpoints_compatible",
"region:us"
] | null | 2026-02-15T12:32:28Z | # Model Card for Llama-3.2-1B-Instruct-stage-2-positive-unclean-1
This model is a fine-tuned version of [deepakachu/Llama-3.2-1B-stage-1-dispreference-tuning-gsm8k-1](https://huggingface.co/deepakachu/Llama-3.2-1B-stage-1-dispreference-tuning-gsm8k-1).
It has been trained using [TRL](https://github.com/huggingface/trl... | [] |
shunshun1/record-testpi05v4model | shunshun1 | 2026-03-12T13:50:00Z | 32 | 0 | lerobot | [
"lerobot",
"safetensors",
"pi05",
"robotics",
"dataset:shunshun1/record-testpi05v4",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-12T13:47:14Z | # Model Card for pi05
<!-- Provide a quick summary of what the model is/does. -->
**π₀.₅ (Pi05) Policy**
π₀.₅ is a Vision-Language-Action model with open-world generalization, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀.₅ repres... | [] |
mradermacher/kappa-20b-131k-GGUF | mradermacher | 2026-03-03T09:55:29Z | 4,229 | 0 | transformers | [
"transformers",
"gguf",
"mixture-of-experts",
"moe",
"long-context",
"fine-tuning",
"sft",
"persona",
"multi-turn",
"tool-calling",
"torchtitan",
"en",
"base_model:eousphoros/kappa-20b-131k",
"base_model:quantized:eousphoros/kappa-20b-131k",
"license:other",
"endpoints_compatible",
"... | null | 2026-03-02T03:26:39Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: MXFP4_MOE x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->... | [] |
wikilangs/zu | wikilangs | 2026-01-11T06:02:47Z | 0 | 0 | wikilangs | [
"wikilangs",
"nlp",
"tokenizer",
"embeddings",
"n-gram",
"markov",
"wikipedia",
"feature-extraction",
"sentence-similarity",
"tokenization",
"n-grams",
"markov-chain",
"text-mining",
"fasttext",
"babelvec",
"vocabulous",
"vocabulary",
"monolingual",
"family-bantu_southern",
"te... | text-generation | 2026-01-11T06:02:31Z | # Zulu - Wikilangs Models
## Comprehensive Research Report & Full Ablation Study
This repository contains NLP models trained and evaluated by Wikilangs, specifically on **Zulu** Wikipedia data.
We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and word embeddings.
## 📋 Repository Contents
... | [
{
"start": 1288,
"end": 1309,
"text": "Tokenizer Compression",
"label": "training method",
"score": 0.7051144242286682
}
] |
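The wikilangs row covers n-gram and Markov-chain analysis of Wikipedia text. A toy word-level bigram Markov chain of the kind such an ablation study evaluates; the two-sentence corpus is a stand-in for the Zulu Wikipedia dump:

```python
# Toy bigram Markov chain; the corpus is a stand-in, not wikilangs data.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat ran home".split()
chain = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    chain[prev].append(nxt)

random.seed(0)
word, out = "the", ["the"]
for _ in range(6):
    options = chain.get(word)
    if not options:  # dead end: the final token has no recorded successors
        break
    word = random.choice(options)
    out.append(word)
print(" ".join(out))
```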
mradermacher/Qwen2.5-32B-Instruct-GGUF | mradermacher | 2025-12-14T21:30:45Z | 76 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"en",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-32B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-14T19:41:44Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
jestevesv/distilbert-base-uncased-distilled-squad | jestevesv | 2025-09-12T21:30:18Z | 0 | 0 | transformers.js | [
"transformers.js",
"onnx",
"distilbert",
"question-answering",
"quantized",
"english",
"license:other",
"region:us"
] | question-answering | 2025-09-12T21:22:40Z | # distilbert-base-spanish-uncased-finetuned-qa-mlqa-onnx
## Model Description
English DistilBERT model fine-tuned for question answering on the MLQA dataset, exported to ONNX and quantized for use with Transformers.js.
## Files
- `config.json`
- `tokenizer.json`
- `tokenizer_config.json`
- `onnx/model_quantized.onnx`... | [] |
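The files in this row target Transformers.js, which runs the quantized ONNX graph in JavaScript. For a quick sanity check from Python, the original PyTorch checkpoint of the same distilled-SQuAD model can be used instead; this sketch assumes the canonical upstream repo rather than the ONNX export:

```python
# Python-side check using the upstream (non-ONNX) checkpoint.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="distilbert/distilbert-base-uncased-distilled-squad",
)
result = qa(
    question="What does the model extract?",
    context="The model extracts answer spans from a given context passage.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```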
contemmcm/de83cf709744250dc83b322e48468e47 | contemmcm | 2025-11-22T06:06:59Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-classification",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-22T05:44:41Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# de83cf709744250dc83b322e48468e47
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/... | [
{
"start": 529,
"end": 537,
"text": "F1 Macro",
"label": "training method",
"score": 0.7068677544593811
}
] |
AngelSlim/Qwen3-4B_eagle3 | AngelSlim | 2026-01-13T06:46:32Z | 664 | 4 | null | [
"safetensors",
"llama",
"qwen3",
"eagle3",
"eagle",
"arxiv:2509.24248",
"arxiv:2509.23809",
"region:us"
] | null | 2025-07-11T07:03:09Z | <p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://github.com/Tencent/AngelSlim/blob/main/docs/source/assets/logos/angelslim_logo_light.png?raw=true">
<img alt="AngelSlim" src="https://github.com/Tencent/AngelSlim/blob/main/docs/source/assets/logos/angelslim_logo.png?raw... | [] |
ellisdoro/bcgo-all-MiniLM-L6-v2_cross_attention_gcn_h512_o64_cosine_e1024_early-on2vec-koji-early | ellisdoro | 2025-09-19T09:11:10Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"ontology",
"on2vec",
"graph-neural-networks",
"base-all-MiniLM-L6-v2",
"general",
"general-ontology",
"fusion-cross_attention",
"gnn-gcn",
"medium-ontology",
"license:apache-2.0",
"text-embedd... | sentence-similarity | 2025-09-19T09:11:04Z | # bcgo_all-MiniLM-L6-v2_cross_attention_gcn_h512_o64_cosine_e1024_early
This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks.
## Model Details
- **Base Text Model**: all-MiniLM-L6-v2
-... | [
{
"start": 496,
"end": 511,
"text": "cross_attention",
"label": "training method",
"score": 0.7400226593017578
}
] |
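Since the row declares a sentence-transformers model, the ontology-augmented embedder should load like any other checkpoint; whether the GNN fusion layers need extra on2vec dependencies at inference time is an assumption worth checking:

```python
# Embedding sketch; plain sentence-transformers loading is assumed to work.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer(
    "ellisdoro/bcgo-all-MiniLM-L6-v2_cross_attention_gcn_h512_o64_cosine_e1024_early-on2vec-koji-early"
)
emb = model.encode(["cell cycle regulation", "mitosis"])
print(model.similarity(emb, emb))  # 2x2 cosine-similarity matrix
```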
Qwen/Qwen2-7B-Instruct-GGUF | Qwen | 2024-08-21T10:28:11Z | 10,661 | 179 | null | [
"gguf",
"chat",
"text-generation",
"en",
"base_model:Qwen/Qwen2-7B-Instruct",
"base_model:quantized:Qwen/Qwen2-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-06-06T13:18:05Z | # Qwen2-7B-Instruct-GGUF
## Introduction
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 7B Qwen... | [] |
mradermacher/GutenOCR-7B-GGUF | mradermacher | 2026-03-12T21:30:43Z | 311 | 0 | transformers | [
"transformers",
"gguf",
"ocr",
"vision",
"qwen2.5-vl",
"pdf",
"document-understanding",
"en",
"base_model:rootsautomation/GutenOCR-7B",
"base_model:quantized:rootsautomation/GutenOCR-7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-23T14:20:22Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
qoranet/QOR-TTS-0.6B | qoranet | 2026-02-22T18:55:50Z | 2 | 0 | null | [
"safetensors",
"qwen3_tts",
"tts",
"voice-cloning",
"text-to-speech",
"qor",
"en",
"zh",
"ja",
"ko",
"de",
"fr",
"es",
"pt",
"ru",
"it",
"base_model:Qwen/Qwen3-TTS-12Hz-0.6B-Base",
"base_model:finetune:Qwen/Qwen3-TTS-12Hz-0.6B-Base",
"license:apache-2.0",
"region:us"
] | text-to-speech | 2026-02-22T18:52:28Z | # QOR-TTS-0.6B
QOR-TTS 0.6B — Fast local voice cloning (~2 GB)
## About
QOR-TTS is a voice cloning text-to-speech model, part of the [QOR AI system](https://github.com/QorAI/qor).
It enables local, offline voice cloning — record a short voice sample and generate speech in that voice.
This model is based on **Qwen/Q... | [] |
Inceptive/ROLEPL-AI-v2-Qwen2.5-32B | Inceptive | 2025-08-26T11:33:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"roleplay",
"conversational",
"en",
"arxiv:2412.15115",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"license:cc-by-nc-sa-4.0",
"text-generation-inference",
"endpoints_compatible",
"regio... | text-generation | 2025-08-26T09:13:46Z | <p align="center">
<img src="https://static.wixstatic.com/media/7986d1_f4bd2d625d2d414d97aad202f8ec7643~mv2.png/v1/fill/w_413,h_110,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/Fichier%2015%201.png" width="400"/>
<p>
<p align="center">
  🤖 <a href="https://www.rolepl-ai.com/">Project websit... | [
{
"start": 467,
"end": 476,
"text": "ROLEPL-AI",
"label": "training method",
"score": 0.8296325206756592
},
{
"start": 479,
"end": 488,
"text": "ROLEPL-AI",
"label": "training method",
"score": 0.8401797413825989
},
{
"start": 1205,
"end": 1214,
"text": "R... |