| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card | entities |
|---|---|---|---|---|---|---|---|---|---|---|
mradermacher/AlgoMind-1.2B-GGUF | mradermacher | 2026-02-01T12:03:58Z | 33 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"text-generation-inference",
"instruction-tuned",
"distilled",
"synthetic-data",
"unsloth",
"lfm2",
"glm",
"agentic",
"edge",
"efficient",
"en",
"dataset:Open-Orca/FLAN",
"dataset:databricks/databricks-dolly-15k",
"dataset:OpenAssistant/oass... | text-generation | 2026-02-01T09:50:46Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
swadhindas324/googlenet-vit | swadhindas324 | 2026-02-18T03:33:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"googlenet_vit",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2026-02-18T03:33:05Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# googlenet-vit
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
Mor... | [] |
surya-ravindra/Llama3.2-1B-Med-Transcript-Notes-Q4_K_M-GGUF | surya-ravindra | 2025-08-16T15:23:27Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:starfishdata/playground_endocronology_notes_1500",
"base_model:GetSoloTech/Llama3.2-1B-Med-Transcript-Notes",
"base_model:quantized:GetSoloTech/Llama3.2-1B-Med-Transcript-Notes",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-16T15:23:19Z | # surya-ravindra/Llama3.2-1B-Med-Transcript-Notes-Q4_K_M-GGUF
This model was converted to GGUF format from [`GetSoloTech/Llama3.2-1B-Med-Transcript-Notes`](https://huggingface.co/GetSoloTech/Llama3.2-1B-Med-Transcript-Notes) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my... | [] |
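As a usage note for GGUF repos like this one: a minimal sketch of loading the quant through the llama-cpp-python bindings (an assumption; the card itself only references llama.cpp), with the filename glob inferred from the repo name rather than a verified file listing.
```python
from llama_cpp import Llama

# Fetch and load the GGUF file from the Hub; the filename glob is an
# assumption based on the repo name, not a verified file listing.
llm = Llama.from_pretrained(
    repo_id="surya-ravindra/Llama3.2-1B-Med-Transcript-Notes-Q4_K_M-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=2048,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Draft a SOAP note from this transcript: ..."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```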
Muapi/50s-noir-movie | Muapi | 2025-08-16T14:55:45Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-16T14:55:30Z | # 50s Noir Movie

**Base model**: Flux.1 D
**Trained words**: 50s Noir Movie Still
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Co... | [] |
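The snippet above is cut off mid-header; a minimal sketch of how such a REST call is typically completed. The `Content-Type` and `x-api-key` header names and the `prompt`/`model_id` payload fields are assumptions, not the documented MUAPI schema.
```python
import os
import requests

# Endpoint taken from the card; everything else below is an assumption,
# not the documented MUAPI request schema.
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {
    "Content-Type": "application/json",        # assumed header
    "x-api-key": os.environ["MUAPI_API_KEY"],  # hypothetical auth header name
}
payload = {
    "prompt": "50s Noir Movie Still, a detective under a streetlamp",  # trigger words per the card
    "model_id": "Muapi/50s-noir-movie",        # hypothetical parameter name
}
resp = requests.post(url, headers=headers, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json())
```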
mradermacher/Bell-LLM-20B-Reasoning-GGUF | mradermacher | 2026-03-05T01:03:34Z | 833 | 0 | transformers | [
"transformers",
"gguf",
"telecom",
"telecommunications",
"gsma",
"fine-tuned",
"en",
"base_model:farbodtavakkoli/OTel-LLM-20B-Reasoning",
"base_model:quantized:farbodtavakkoli/OTel-LLM-20B-Reasoning",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-02-20T22:08:49Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
ferrazzipietro/ULS-MultiClinNERsv-Qwen2.5-14B-disease | ferrazzipietro | 2026-03-15T13:30:19Z | 87 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen2.5-14B",
"lora",
"transformers",
"base_model:Qwen/Qwen2.5-14B",
"license:apache-2.0",
"region:us"
] | null | 2026-03-15T12:49:24Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ULS-MultiClinNERsv-Qwen2.5-14B-disease
This model is a fine-tuned version of [Qwen/Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2... | [] |
mradermacher/CreativeWriter-Llama3.2-3B-GGUF | mradermacher | 2025-11-16T17:50:13Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"en",
"dataset:theprint/AuthorsAssistant",
"base_model:theprint/CreativeWriter-Llama3.2-3B",
"base_model:quantized:theprint/CreativeWriter-Llama3.2-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conver... | null | 2025-11-16T17:02:47Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
NomikLee/distilbert-base-uncased-finetuned-cola | NomikLee | 2025-12-24T02:04:34Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"re... | text-classification | 2025-12-24T02:04:03Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/dis... | [
{
"start": 190,
"end": 228,
"text": "distilbert-base-uncased-finetuned-cola",
"label": "training method",
"score": 0.790026843547821
},
{
"start": 269,
"end": 292,
"text": "distilbert-base-uncased",
"label": "training method",
"score": 0.816415011882782
},
{
"star... |
AlignmentResearch/obfuscation-atlas-gemma-3-27b-it-kl0.001-det3-seed2-mbpp_probe | AlignmentResearch | 2026-02-20T21:59:21Z | 0 | 0 | peft | [
"peft",
"deception-detection",
"rlvr",
"alignment-research",
"obfuscation-atlas",
"lora",
"model-type:obfuscated-activations",
"arxiv:2602.15515",
"base_model:google/gemma-3-27b-it",
"base_model:adapter:google/gemma-3-27b-it",
"license:mit",
"region:us"
] | null | 2026-02-16T09:26:47Z | # RLVR-trained policy from The Obfuscation Atlas
This is a policy trained on MBPP-Honeypot with deception probes,
from the [Obfuscation Atlas paper](https://arxiv.org/abs/2602.15515),
uploaded for reproducibility and further research.
The training code and RL environment are available at: https://github.com/Alignment... | [] |
Edge-Quant/granite-4.0-micro-Q4_K_M-GGUF | Edge-Quant | 2025-11-28T17:16:03Z | 10 | 0 | transformers | [
"transformers",
"gguf",
"language",
"granite-4.0",
"llama-cpp",
"gguf-my-repo",
"base_model:ibm-granite/granite-4.0-micro",
"base_model:quantized:ibm-granite/granite-4.0-micro",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-11-28T17:15:50Z | # Edge-Quant/granite-4.0-micro-Q4_K_M-GGUF
This model was converted to GGUF format from [`ibm-granite/granite-4.0-micro`](https://huggingface.co/ibm-granite/granite-4.0-micro) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](... | [] |
uwcc/WoodenToyFood | uwcc | 2025-09-03T17:50:04Z | 32 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-09-03T03:19:46Z | # WoodenToyFood
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
You should use `WoodenToyFood` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safe... | [] |
maverickdelacruz/bert-phishing-classifier_teacher | maverickdelacruz | 2025-10-18T18:24:43Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-09-24T11:27:04Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-phishing-classifier_teacher
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/goo... | [] |
mradermacher/Midnight-Miqu-70B-v1.5-GGUF | mradermacher | 2024-12-04T13:49:07Z | 3,752 | 20 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:sophosympatheia/Midnight-Miqu-70B-v1.5",
"base_model:quantized:sophosympatheia/Midnight-Miqu-70B-v1.5",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-04T10:36:28Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.5
<!-- provided-files -->
weighted/imatrix quants are available at https://... | [] |
rkumagai/dpo-qwen-cot-merged | rkumagai | 2026-02-08T06:40:47Z | 30 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"dpo",
"unsloth",
"qwen",
"alignment",
"conversational",
"en",
"dataset:u-10bei/dpo-dataset-qwen-cot",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"text-gener... | text-generation | 2026-02-07T07:56:48Z | # my-qwen3-4b-dpo-qwen-cot-merged-ver1
This model is a fine-tuned version of **Qwen/Qwen3-4B-Instruct-2507** using **Direct Preference Optimization (DPO)** via the **Unsloth** library.
This repository contains the **full-merged 16-bit weights**. No adapter loading is required.
## Training Objective
This model has be... | [
{
"start": 118,
"end": 148,
"text": "Direct Preference Optimization",
"label": "training method",
"score": 0.8797093033790588
},
{
"start": 150,
"end": 153,
"text": "DPO",
"label": "training method",
"score": 0.8655135035514832
},
{
"start": 339,
"end": 342,
... |
daydreamlive/Wan2.1-T2V-14B | daydreamlive | 2026-02-25T14:20:16Z | 9 | 0 | diffusers | [
"diffusers",
"safetensors",
"t2v",
"daydream-scope",
"mirror",
"license:apache-2.0",
"region:us"
] | null | 2026-02-25T13:37:47Z | # Wan2.1-T2V-14B
Wan2.1 Text-to-Video 14B parameter model.
## About This Repo
This is a mirror of [Wan-AI/Wan2.1-T2V-14B](https://huggingface.co/Wan-AI/Wan2.1-T2V-14B) maintained by [Daydream](https://github.com/daydreamlive) for use with [Scope](https://github.com/daydreamlive/scope).
All credit for the or... | [] |
kushaaagr/controlnet-dogpose-t_6K-epoch_1-lr_2e-6 | kushaaagr | 2025-12-13T18:49:38Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"base_model:Manojb/stable-diffusion-2-1-base",
"base_model:adapter:Manojb/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2025-12-13T15:33:57Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-kushaaagr/controlnet-dogpose-t_6K-epoch_1-lr_2e-6
These are controlnet weights trained on Manojb/stable-diffu... | [] |
hinge/danstral-v1 | hinge | 2025-11-24T10:27:43Z | 6 | 5 | peft | [
"peft",
"safetensors",
"speech-to-text",
"lora",
"danish",
"fine-tuned",
"voxtral",
"whisper",
"da",
"dataset:CoRal-project/coral",
"base_model:mistralai/Voxtral-Small-24B-2507",
"base_model:adapter:mistralai/Voxtral-Small-24B-2507",
"model-index",
"region:us"
] | null | 2025-09-19T13:28:26Z | # Voxtral-Small-24B LoRA Fine-tuned on CoRaL
**Danstral** is a state-of-the-art 24B parameter model for Danish automatic speech recognition (ASR). It combines the decoder and audio-adapter of [**Voxtral-Small-24B-2507**](https://huggingface.co/mistralai/Voxtral-Small-24B-2507) with the audio encoder from [**roest-whis... | [] |
furproxy/9b-35 | furproxy | 2026-04-09T03:14:49Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-04-09T02:01:12Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen35_caption_galore
This model is a fine-tuned version of [/workspace/models/Qwen3.5-9B](https://huggingface.co//workspace/mode... | [] |
AIencoder/Nanbeige4.1-3B-Q8_0-GGUF | AIencoder | 2026-03-05T14:58:03Z | 48 | 0 | transformers | [
"transformers",
"gguf",
"llm",
"nanbeige",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"zh",
"base_model:Nanbeige/Nanbeige4.1-3B",
"base_model:quantized:Nanbeige/Nanbeige4.1-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2026-03-05T14:57:43Z | # AIencoder/Nanbeige4.1-3B-Q8_0-GGUF
This model was converted to GGUF format from [`Nanbeige/Nanbeige4.1-3B`](https://huggingface.co/Nanbeige/Nanbeige4.1-3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingfac... | [] |
C-L-V/mbart-neutralization | C-L-V | 2026-02-25T11:58:36Z | 27 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"simplification",
"generated_from_trainer",
"base_model:facebook/mbart-large-50",
"base_model:finetune:facebook/mbart-large-50",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2026-02-24T13:02:20Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-neutralization
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-... | [] |
Yashwanth1508/AgroAI-pest-detection | Yashwanth1508 | 2026-02-14T13:34:12Z | 20 | 0 | ultralytics | [
"ultralytics",
"object-detection",
"YOLO11s",
"pests",
"agriculture",
"ip102",
"dataset:IP102",
"license:mit",
"model-index",
"region:us"
] | object-detection | 2026-02-14T13:34:12Z | # 🐞 IP102 Pest Detector — YOLO11 Small
A custom YOLO11 object detection model trained on the **IP102** dataset — designed for pest detection in precision agriculture.
> **Model Purpose:** Detect and classify 102 pest species in real-time field conditions using computer vision.
---
## 💡 Model Details
- **Model:**... | [] |
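For ultralytics repos like this one, inference follows the standard YOLO API; a minimal sketch, where `best.pt` is a hypothetical weights filename to be checked against the repo's file list.
```python
from ultralytics import YOLO

model = YOLO("best.pt")  # hypothetical filename; check the repo's files
results = model.predict(source="field_photo.jpg", conf=0.25)
for r in results:
    for box in r.boxes:
        # Print the detected pest class name and its confidence score
        print(model.names[int(box.cls)], float(box.conf))
```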
LizardAPN/LunarLander-v2-with-ppo | LizardAPN | 2025-08-17T15:36:17Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2025-08-17T14:33:39Z | # PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
exp_name: ppo
seed: 1
torch_deterministic: True
cuda: True
track: False
wandb_project_name: cleanRL
wandb_entity: None
capture_video: False
env_id: LunarLander-v2
total... | [] |
psychopenguin/llama-major-project-Q4_K_M-GGUF | psychopenguin | 2026-01-08T19:19:09Z | 35 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:psychopenguin/llama-major-project",
"base_model:quantized:psychopenguin/llama-major-project",
"endpoints_compatible",
"region:us"
] | null | 2026-01-08T19:18:56Z | # psychopenguin/llama-major-project-Q4_K_M-GGUF
This model was converted to GGUF format from [`psychopenguin/llama-major-project`](https://huggingface.co/psychopenguin/llama-major-project) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original... | [] |
rookiezyp/Qwen2.5-1.5B-alpaca-cleaned-all-epochs2-20260312 | rookiezyp | 2026-03-12T13:25:38Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"endpoints_compatible",
"region:us"
] | null | 2026-03-12T08:57:29Z | # Model Card for Qwen2.5-1.5B-alpaca-cleaned-all-epochs2-20260312
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you ha... | [] |
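Many TRL-generated cards in this dump share the quick-start snippet above, truncated mid-string; a minimal completed sketch of that pattern, with a generic prompt substituted for the elided one.
```python
from transformers import pipeline

# Generic stand-in for the card's truncated example question.
question = "Summarize the Alpaca instruction format in two sentences."
generator = pipeline(
    "text-generation",
    model="rookiezyp/Qwen2.5-1.5B-alpaca-cleaned-all-epochs2-20260312",
)
output = generator(
    [{"role": "user", "content": question}],
    max_new_tokens=128,
    return_full_text=False,
)[0]
print(output["generated_text"])
```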
BRlkl/BingoGuard-bert-large-pt3 | BRlkl | 2025-08-28T05:58:20Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:neuralmind/bert-large-portuguese-cased",
"base_model:finetune:neuralmind/bert-large-portuguese-cased",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-08-28T04:18:49Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BingoGuard-bert-large-pt3
This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/n... | [
{
"start": 473,
"end": 475,
"text": "F1",
"label": "training method",
"score": 0.7376524209976196
},
{
"start": 1152,
"end": 1154,
"text": "F1",
"label": "training method",
"score": 0.7543559074401855
}
] |
akhaliq/GemmaGradio | akhaliq | 2025-08-20T21:39:33Z | 3 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-20T21:34:15Z | # Model Card for GemmaGradio
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could... | [] |
vidore/colqwen2.5-v0.2 | vidore | 2025-06-16T12:04:42Z | 74,618 | 98 | colpali | [
"colpali",
"safetensors",
"vidore",
"vidore-experimental",
"visual-document-retrieval",
"en",
"arxiv:2004.12832",
"arxiv:2407.01449",
"arxiv:2106.09685",
"base_model:vidore/colqwen2.5-base",
"base_model:finetune:vidore/colqwen2.5-base",
"license:mit",
"region:us"
] | visual-document-retrieval | 2025-01-31T13:26:42Z | # ColQwen2.5: Visual Retriever based on Qwen2.5-VL-3B-Instruct with ColBERT strategy
ColQwen is a model built on a novel architecture and training strategy that leverages Vision Language Models (VLMs) to efficiently index documents from their visual features.
It is a [Qwen2.5-VL-3B](https://huggingface.co/Qwen/Qwen2.5... | [] |
QuiteLLM/sn97-test-ver87-checkpoint-3500 | QuiteLLM | 2026-04-24T19:45:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:QuiteLLM/sn97-test-ver85-ckpt1575",
"base_model:finetune:QuiteLLM/sn97-test-ver85-ckpt1575",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-24T19:44:25Z | # Model Card for v87
This model is a fine-tuned version of [QuiteLLM/sn97-test-ver85-ckpt1575](https://huggingface.co/QuiteLLM/sn97-test-ver85-ckpt1575).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time mach... | [] |
WindyWord/translate-en-tiv | WindyWord | 2026-04-27T23:57:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"english",
"tiv",
"en",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2026-04-17T02:30:12Z | # WindyWord.ai Translation — English → Tiv
**Translates English → Tiv.**
**Quality Rating: ⭐⭐⭐½ (3.5★ Standard)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs.
## Quality & Pricing Tier
- **5-star rating:** 3.5★ ⭐⭐⭐½
- **Tier:** Standard
- **Composite scor... | [] |
titouv/gemma-ukraine-finetuned | titouv | 2026-01-25T21:11:07Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-12b-pt",
"base_model:finetune:google/gemma-3-12b-pt",
"endpoints_compatible",
"region:us"
] | null | 2026-01-25T17:53:54Z | # Model Card for gemma-ukraine-finetuned
This model is a fine-tuned version of [google/gemma-3-12b-pt](https://huggingface.co/google/gemma-3-12b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine,... | [] |
ZDCSlab/ripd-ultra-real-llama3-8b-instruct-seed-bt | ZDCSlab | 2026-02-22T13:21:30Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"alignment",
"evaluation",
"preference-learning",
"ripd",
"text-generation",
"conversational",
"dataset:ZDCSlab/ripd-dataset",
"arxiv:2602.13576",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"endp... | text-generation | 2026-02-19T21:22:08Z | # ZDCSlab/ripd-ultra-real-llama3-8b-instruct-seed-bt
This checkpoint is part of the artifact release for
**“Rubrics as an Attack Surface: Stealthy Preference Drift in LLM Judges.”**
It is a policy model trained under a specific rubric condition to study how evaluation-time preference drift propagates into downstrea... | [] |
uva-cv-lab/FrameINO_Wan2.2_5B_Stage1_Motion_v1.5 | uva-cv-lab | 2025-11-19T16:58:27Z | 30 | 3 | diffusers | [
"diffusers",
"safetensors",
"arxiv:2505.21491",
"license:gpl-3.0",
"region:us"
] | null | 2025-10-26T18:45:46Z | <div align="center">
# Frame In-N-Out: Unbounded Controllable Image-to-Video Generation
</div>
<div align="center">
<a href=https://uva-computer-vision-lab.github.io/Frame-In-N-Out/ target="_blank"><img src=https://img.shields.io/badge/Project%20Page-333399.svg?logo=homepage height=22px></a>
<a href=https://hu... | [] |
Horbee/bert-german-offensive-comment-classifier | Horbee | 2025-11-10T20:09:51Z | 1 | 0 | null | [
"safetensors",
"bert",
"text-classification",
"de",
"base_model:google-bert/bert-base-german-cased",
"base_model:finetune:google-bert/bert-base-german-cased",
"license:mit",
"region:us"
] | text-classification | 2025-11-10T19:12:10Z | # Horbee/bert-german-offensive-comment-classifier aka SauerBERT
SauerBERT is a fine-tuned German BERT-based transformer model for offensive comment detection.
It was fine-tuned for 2 epochs on a balanced dataset of 8,000 examples from the GermEval 2018 and 2019 shared tasks. The model achieves strong perform... | [] |
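Classifier cards like this one can be exercised with the standard transformers pipeline; a minimal sketch (label names depend on the model's config):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="Horbee/bert-german-offensive-comment-classifier",
)
print(clf("Das ist ein völlig harmloser Kommentar."))
# e.g. [{"label": ..., "score": ...}]; labels depend on the model config
```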
WindyWord/translate-mkh-en | WindyWord | 2026-04-20T13:31:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"mon-khmer",
"vietnamese",
"khmer",
"mon",
"english",
"mkh",
"en",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2026-04-19T04:59:00Z | # WindyWord.ai Translation — Mon-Khmer → English
**Translates Mon-Khmer (Vietnamese, Khmer, Mon) → English.**
**Quality Rating: ⭐⭐½ (2.5★ Basic)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs.
## Quality & Pricing Tier
- **5-star rating:** 2.5★ ⭐⭐½
- **Tie... | [] |
HiDream-ai/HiDream-I1-Fast | HiDream-ai | 2025-06-16T16:18:12Z | 54,126 | 104 | diffusers | [
"diffusers",
"safetensors",
"image-generation",
"HiDream.ai",
"text-to-image",
"en",
"arxiv:2505.22705",
"license:mit",
"diffusers:HiDreamImagePipeline",
"region:us"
] | text-to-image | 2025-04-06T14:18:51Z | 
`HiDream-I1` is a new open-source image generative foundation model with 17B parameters that achieves state-of-the-art image generation quality within seconds.
<span style="color: #FF5733; font-weight: bold">For more features and to experience the full capabilities of our product, please ... | [] |
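The row's `diffusers:HiDreamImagePipeline` tag names the pipeline class; a minimal text-to-image sketch, assuming all pipeline components resolve from the repo (the Llama text encoder may be gated and need to be supplied separately) and that a bfloat16-capable CUDA device is available.
```python
import torch
from diffusers import HiDreamImagePipeline

# Assumes the repo's model_index.json resolves every component; the text
# encoder may be gated and require separate authentication.
pipe = HiDreamImagePipeline.from_pretrained(
    "HiDream-ai/HiDream-I1-Fast", torch_dtype=torch.bfloat16
).to("cuda")
image = pipe(
    "a lighthouse on a cliff at dusk, photorealistic",
    num_inference_steps=16,  # the Fast variant targets few-step generation
).images[0]
image.save("hidream_fast.png")
```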
0danielfonseca/Doninha | 0danielfonseca | 2026-03-16T22:48:34Z | 3 | 0 | null | [
"region:us"
] | null | 2026-03-16T22:37:35Z | # Hybrid LLM Model by Daniel Fonseca
> *"Paraconsistent Logic + Kantian Judgment + Table of Concepts = Gentle Explosion (without trivialization)"*
## Architecture
```
PROMPT
  └── [L1] Table of Concepts (Aristotle: Categories)
        └── [L2] Kantian Judgments (Kant: Critique of Pure Reason §9)
... | [] |
mthirumalai/so101-picknplace3-policy | mthirumalai | 2026-02-14T23:42:16Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:mthirumalai/so101-picknplace3",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-14T23:42:06Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
36n9/Vehuiah-Draco-20260425_054423 | 36n9 | 2026-04-25T05:44:27Z | 0 | 0 | transformers | [
"transformers",
"autonomous-ai",
"self-improving",
"perpetual-learning",
"research-automation",
"knowledge-synthesis",
"sel-1.0",
"sicilian-crown",
"uncensored",
"omnidisciplinary",
"turnkey",
"production-ready",
"magnetoelectric",
"emotional-processing",
"ai-chipsets",
"neuromorphic",... | question-answering | 2026-04-25T05:44:24Z | ---
license: other
library_name: transformers
tags:
- autonomous-ai
- self-improving
- perpetual-learning
- research-automation
- knowledge-synthesis
- sel-1.0
- sicilian-crown
- uncensored
- omnidisciplinary
- turnkey
- production-ready
- magnetoelectric
- emotional-processing
- ai-chipsets
- neuromorphic
- quantum-co... | [] |
tytytyyt/distilbert-base-uncased-distilled-clinc | tytytyyt | 2026-01-16T22:17:51Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-01-16T22:03:39Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.... | [
{
"start": 193,
"end": 232,
"text": "distilbert-base-uncased-distilled-clinc",
"label": "training method",
"score": 0.8843860626220703
},
{
"start": 275,
"end": 298,
"text": "distilbert-base-uncased",
"label": "training method",
"score": 0.8697339296340942
},
{
"s... |
patmodels/bpm | patmodels | 2026-01-10T16:00:31Z | 1 | 1 | null | [
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null | 2025-08-08T03:51:02Z | # BAAI Patent Model (BPM)
## Model Overview
**Breaking language barriers in intellectual property innovation.**
BAAI Patent Model represents a significant advancement in specialized machine translation, designed specifically for the unique linguistic challenges of patent documentation. Built upon the robust Qwen2.5-... | [] |
random-sequence/peak-bloom-ember | random-sequence | 2026-02-25T09:46:11Z | 0 | 0 | null | [
"federated-learning",
"fl-alliance",
"defense_isr",
"license:apache-2.0",
"region:us"
] | null | 2026-02-25T09:46:09Z | # FL-Alliance Federated Model: peak-bloom-ember
This model was trained using **FL-Alliance** decentralized federated learning.
## Training Details
| Parameter | Value |
|-----------|-------|
| Task Type | `defense_isr` |
| Total Rounds | 5 |
| Model Hash | `0ddc82b6cf3077ea72ea83162dcf6bf92df64fe7d684079c20264ae10a3... | [
{
"start": 719,
"end": 744,
"text": "on-chain consensus voting",
"label": "training method",
"score": 0.8075606226921082
}
] |
cloudyu/GPT-OSS-120B-MLX-q4-Claude-4.6-Opus-Reasoning-Distilled | cloudyu | 2026-04-10T11:46:44Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"en",
"4-bit",
"region:us"
] | text-generation | 2026-04-10T04:32:30Z | # Model Card: `cloudyu/GPT-OSS-120B-MLX-q4-Claude-4.6-Opus-Reasoning-Distilled`
This is a **LoRA‑fine‑tuned** version of [openai/gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b) (converted to MLX 4‑bit format) that has been specialised for **step‑by‑step reasoning** on mathematical, coding and algorithmic pro... | [] |
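MLX-format repos like this one are typically driven with the mlx-lm package; a minimal sketch, assuming an Apple Silicon machine with enough unified memory for the 4-bit weights.
```python
from mlx_lm import load, generate

# Loads the 4-bit MLX weights and the matching tokenizer.
model, tokenizer = load(
    "cloudyu/GPT-OSS-120B-MLX-q4-Claude-4.6-Opus-Reasoning-Distilled"
)
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Prove that the sum of two even integers is even."}],
    add_generation_prompt=True,
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=512))
```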
Pickamon/CogniTune-Qwen2.5-3B | Pickamon | 2026-03-26T06:47:22Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen2",
"lora",
"fine-tuned",
"education",
"ai-tutor",
"en",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"region:us"
] | null | 2026-03-26T06:35:03Z | # CogniTune-Qwen2.5-3B
A domain-specialized AI/ML tutor model fine-tuned from Qwen2.5-3B-Instruct
using LoRA on Apple Silicon (M5 Pro, 24GB unified memory) via MLX.
## What It Does
Standard LLMs respond to AI/ML questions like encyclopedias — dense,
exhaustive, impersonal. CogniTune responds like a tutor — leading... | [
{
"start": 106,
"end": 110,
"text": "LoRA",
"label": "training method",
"score": 0.8748271465301514
},
{
"start": 1400,
"end": 1404,
"text": "LoRA",
"label": "training method",
"score": 0.8928123116493225
},
{
"start": 1501,
"end": 1505,
"text": "LoRA",
... |
sartifyllc/Pawa-Gemma-Swahili-2B | sartifyllc | 2025-01-14T10:21:06Z | 1,640 | 3 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"sw",
"en",
"base_model:google/gemma-2-2b",
"base_model:finetune:google/gemma-2-2b",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-01-13T20:15:37Z | # PAWA: Swahili SML for Various Tasks
---
## Overview
**PAWA** is a Swahili-specialized language model designed to excel in tasks requiring nuanced understanding and interaction in Swahili and English. It leverages supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) for improved performance and con... | [] |
prasanacodes/svara-tts-v1-Q4_K_M-GGUF | prasanacodes | 2026-05-03T15:58:57Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-to-speech",
"speech-synthesis",
"multilingual",
"indic",
"orpheus",
"lora",
"low-latency",
"zero-shot",
"emotions",
"discrete-audio-tokens",
"llama-cpp",
"gguf-my-repo",
"hi",
"bn",
"mr",
"te",
"kn",
"bho",
"mag",
"hne",
"mai",
"as",
"brx... | text-to-speech | 2026-05-03T15:58:41Z | # prasanacodes/svara-tts-v1-Q4_K_M-GGUF
This model was converted to GGUF format from [`kenpath/svara-tts-v1`](https://huggingface.co/kenpath/svara-tts-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.c... | [] |
ReadyArt/Dark-Nexus-32B-v2.0-EXL3 | ReadyArt | 2025-10-31T17:14:22Z | 3 | 0 | null | [
"nsfw",
"explicit",
"roleplay",
"unaligned",
"dangerous",
"ERP",
"Other License",
"base_model:ReadyArt/Dark-Nexus-32B-v2.0",
"base_model:quantized:ReadyArt/Dark-Nexus-32B-v2.0",
"license:other",
"region:us"
] | null | 2025-10-31T17:13:16Z | <style>
:root {
--dark-bg: #0a0505;
--lava-red: #ff3300;
--lava-orange: #ff6600;
--lava-yellow: #ff9900;
--neon-blue: #00ccff;
--neon-purple: #cc00ff;
}
* {
margin: 0;
padding: 0;
box-siz... | [] |
tatsuji1962/qwen3-4b-structured-output-loralev.09 | tatsuji1962 | 2026-02-19T14:17:54Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-19T14:17:47Z | tatsuji1962/qwen3-4b-structured-output-loralev.09
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trai... | [
{
"start": 151,
"end": 156,
"text": "QLoRA",
"label": "training method",
"score": 0.7599544525146484
}
] |
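Since the card states the repo holds LoRA adapter weights only, loading follows the usual PEFT two-step pattern; a minimal sketch, assuming default dtype and device placement.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen3-4B-Instruct-2507"
adapter_id = "tatsuji1962/qwen3-4b-structured-output-loralev.09"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA adapter
```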
contemmcm/9892c69b8ce660332f64cf05e2154f17 | contemmcm | 2025-10-18T03:03:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-3b",
"base_model:finetune:google-t5/t5-3b",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-10-17T23:59:16Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 9892c69b8ce660332f64cf05e2154f17
This model is a fine-tuned version of [google-t5/t5-3b](https://huggingface.co/google-t5/t5-3b) ... | [] |
onnxmodelzoo/caffenet-12-int8 | onnxmodelzoo | 2025-09-29T18:21:16Z | 0 | 0 | null | [
"onnx",
"validated",
"vision",
"classification",
"caffenet",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-29T18:21:10Z | <!--- SPDX-License-Identifier: BSD-3-Clause -->
# CaffeNet
|Model |Download |Download (with sample test data)| ONNX version |Opset version|Top-1 accuracy (%)|Top-5 accuracy (%)|
| ------------- | ------------- | ------------- | ------------- | ------------- |------------- | ------------- |
|CaffeNet| [2... | [] |
Tilas/distilhubert-finetuned-gtzan | Tilas | 2026-02-23T15:12:41Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2026-02-23T14:16:44Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distil... | [] |
mr7371/distilbert-financial-phrasebank-allagree_LoRA_r4 | mr7371 | 2025-12-01T00:36:28Z | 0 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:mr7371/distilbert-financial-phrasebank-allagree_headonly",
"base_model:finetune:mr7371/distilbert-financial-phrasebank-allagree_headonly",
"license:apache-2.0",
"text-embeddings-inference",
"end... | text-classification | 2025-12-01T00:36:17Z | <!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert-financial-phrasebank-allagree_LoRA_r4
This model is a fine-tuned version of [mr7371/distilbert-financial-phrasebank-allagree... | [] |
Vara1605454/bert-finetuned-imdb | Vara1605454 | 2025-11-18T18:49:33Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-18T18:49:15Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-imdb
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unk... | [] |
Mirnegg/r1_qwen_1_5b_limo_sft_ep-2 | Mirnegg | 2025-11-21T02:27:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"license:other",
"text-generation-inference",... | text-generation | 2025-11-21T02:22:52Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# r1_qwen_1_5b_limo_sft_cleaned_ep-2
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://hugg... | [] |
newreyy/sentiment-analysis-base | newreyy | 2026-01-28T04:11:15Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"id",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-01-28T04:04:05Z | # Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
This model is a fine-tuned version of [IndoBertweet-base-uncased](https://huggingface.co/indolem/indobertweet-base-uncased) for Indonesian sentiment analysis. The model is designed to c... | [] |
shoumenchougou/RWKV7-G1f-1.5B-GGUF | shoumenchougou | 2026-04-21T08:23:05Z | 0 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-21T08:07:47Z | ## 1️⃣ What are G0 / G1 / G1a2 / G1b / G1c / G1d / G1e?
Fields such as G0 / G1a / G1b in RWKV model names indicate versions of the training data. In terms of data quality, the ranking is: **G1e > G1d > G1c > G1b > G1a2 > G1a > G1 > G0a2 > G0**.
The RWKV7-G1a model is an advanced version of RWKV7-G1 that was furthe... | [] |
architchitte/Construction-Hazard-Detection | architchitte | 2026-03-14T04:36:34Z | 92 | 0 | ultralytics | [
"ultralytics",
"onnx",
"object-detection",
"yolo26",
"yolo11",
"pytorch",
"construction-safety",
"hazard-detection",
"en",
"dataset:custom",
"license:agpl-3.0",
"region:us"
] | object-detection | 2026-03-14T04:36:34Z | # Construction-Hazard-Detection
YOLO-based (primarily YOLO26) models for construction-site hazard detection. These models detect:
- Workers without helmets and/or safety vests
- Workers near machinery or vehicles
- Workers in restricted areas (derived from safety cone clustering)
- Machinery/vehicles near utility pol... | [
{
"start": 391,
"end": 395,
"text": "ONNX",
"label": "training method",
"score": 0.7195544838905334
},
{
"start": 1234,
"end": 1238,
"text": "ONNX",
"label": "training method",
"score": 0.7093192338943481
}
] |
Odog16/tool_pickup_ACT_policy_B | Odog16 | 2026-02-26T21:27:57Z | 51 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:Odog16/tool_pickup",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-26T21:24:56Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
CloveAI/clov-bio-0.3b-instruct | CloveAI | 2026-04-13T15:52:47Z | 12 | 0 | null | [
"safetensors",
"biogpt",
"license:mit",
"region:us"
] | null | 2025-11-09T07:43:49Z | ### 📘 Model Overview
This model is a **LoRA fine-tuned version** of Microsoft’s [BioGPT](https://huggingface.co/microsoft/biogpt), specialized for **instruction-style question answering and reasoning** in the **biomedical and healthcare domain**.
It was trained using **2,000 medical instruction–response pairs** to e... | [
{
"start": 40,
"end": 44,
"text": "LoRA",
"label": "training method",
"score": 0.8701487183570862
},
{
"start": 1079,
"end": 1083,
"text": "LoRA",
"label": "training method",
"score": 0.8774198293685913
}
] |
mimimimi2002/smolvla_spatial_finetuning | mimimimi2002 | 2025-12-15T20:09:11Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:mimimimi2002/libero_spatial",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-15T20:08:57Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
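LeRobot policy repos like this one load through the library's `from_pretrained` hook; a minimal sketch, assuming the import path below (the module layout has shifted between lerobot releases, so verify against your installed version).
```python
# Import path is an assumption; it has moved between lerobot releases.
from lerobot.common.policies.smolvla.modeling_smolvla import SmolVLAPolicy

policy = SmolVLAPolicy.from_pretrained("mimimimi2002/smolvla_spatial_finetuning")
policy.eval()
# At control time, feed a batch dict of camera frames, robot state, and the
# task instruction string, then query one action at a time:
# action = policy.select_action(observation)
```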
sainikhiljuluri2015/Foundation-Sec-Cybersecurity-8B-Merged | sainikhiljuluri2015 | 2025-12-06T20:08:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"cybersecurity",
"security",
"foundation-sec",
"fine-tuned",
"merged",
"conversational",
"en",
"dataset:Trendyol/Trendyol-Cybersecurity-Instruction-Tuning-Dataset",
"dataset:AlicanKiraz0/Cybersecurity-Dataset-Fenrir-v2.0",
"dataset... | text-generation | 2025-12-05T19:57:22Z | # Foundation-Sec-Cybersecurity-8B-Merged
Fine-tuned **fdtn-ai/Foundation-Sec-8B** specialized for **cybersecurity** tasks.
This is a merged model (LoRA weights merged into base) for easy deployment.
## Model Description
This model was trained on ~50,000 cybersecurity instruction-response pairs from:
- Trendyol Cyber... | [] |
EricB/Qwen3.5-35B-A3B-UQFF | EricB | 2026-03-14T10:53:55Z | 20 | 0 | null | [
"qwen3_5_moe",
"uqff",
"mistral.rs",
"region:us"
] | null | 2026-03-14T10:39:49Z | # `Qwen/Qwen3.5-35B-A3B-UQFF`, UQFF quantization
Run with [mistral.rs](https://github.com/EricLBuehler/mistral.rs). Documentation: [UQFF docs](https://github.com/EricLBuehler/mistral.rs/blob/master/docs/UQFF.md).
1) **Flexible**: Multiple quantization formats in *one* file format with *one* framework to run them all.... | [] |
rez0/gguf-vocab-heap-oob-poc | rez0 | 2026-02-24T03:36:41Z | 54 | 0 | null | [
"gguf",
"security-research",
"llama-cpp",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2026-02-24T03:36:38Z | # GGUF Vocab Heap Buffer Over-Read — Security Research PoC
**This is a security research proof of concept. Do not use in production.**
## Vulnerability
Crafted GGUF model files cause a heap buffer over-read in llama.cpp when the `tokenizer.ggml.scores` or `tokenizer.ggml.token_type` arrays have fewer elements than t... | [] |
CelesteImperia/Whisper-Large-v3-Turbo-GGML | CelesteImperia | 2026-03-26T22:38:11Z | 0 | 0 | whisper.cpp | [
"whisper.cpp",
"whisper",
"ggml",
"whisper-cpp",
"audio",
"transcription",
"celeste-imperia",
"automatic-speech-recognition",
"base_model:openai/whisper-large-v3-turbo",
"base_model:finetune:openai/whisper-large-v3-turbo",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | 2026-03-26T22:36:08Z | # Whisper-Large-v3-Turbo-GGML (Platinum Series)



[](https://razorpay.... | [] |
FrankYuzhe/lemon_box_0226_merged_200_0227_141701 | FrankYuzhe | 2026-02-27T23:41:24Z | 288 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:FrankYuzhe/lemon_box_0226_merged_200",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-27T23:41:08Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
mradermacher/Hermes-4.3-36B-heretic-GGUF | mradermacher | 2025-12-15T16:46:04Z | 215 | 1 | transformers | [
"transformers",
"gguf",
"Bytedance Seed",
"instruct",
"finetune",
"reasoning",
"hybrid-mode",
"chatml",
"function calling",
"tool use",
"json mode",
"structured outputs",
"atropos",
"dataforge",
"long context",
"roleplaying",
"chat",
"heretic",
"uncensored",
"decensored",
"ab... | null | 2025-12-15T03:56:55Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
JackAILab/StarVLA_WM4A | JackAILab | 2026-04-15T16:40:37Z | 0 | 1 | starvla | [
"starvla",
"robotics",
"vision-language-action",
"vla",
"libero",
"franka",
"manipulation",
"cosmos-predict2",
"dataset:openvla/modified_libero_rlds",
"base_model:nvidia/Cosmos-Predict2-2B-Video2World",
"base_model:finetune:nvidia/Cosmos-Predict2-2B-Video2World",
"license:apache-2.0",
"regio... | robotics | 2026-04-15T16:07:06Z | # StarVLA-WM4A (LIBERO)
**StarVLA-WM4A** is a Vision-Language-Action (VLA) policy built on top of the
[StarVLA](https://github.com/starVLA/starVLA) framework. It couples the
[Cosmos-Predict2](https://huggingface.co/nvidia/Cosmos-Predict2-2B-Video2World)
video world model as a frozen perception backbone with a lightwei... | [] |
saparbayev-azizbek/aidentist | saparbayev-azizbek | 2026-03-04T08:35:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-7B-Instruct-1M",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct-1M",
"endpoints_compatible",
"region:us"
] | null | 2026-02-26T13:23:07Z | # Model Card for aidentist
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct-1M](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-1M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, b... | [] |
zlyngkhoi/grpo_biogrid_qwen_3g-1.7b | zlyngkhoi | 2026-02-19T12:00:21Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"grpo",
"trackio:https://huggingface.co/spaces/zlyngkhoi/grpo_biogrid_qwen_3g-1.7b",
"trl",
"trackio",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B... | text-generation | 2026-02-19T09:24:38Z | # Model Card for grpo_biogrid_qwen_3g-1.7b
This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could... | [] |
hamzax001/sentence_seg | hamzax001 | 2025-10-11T13:45:47Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:24004",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:... | sentence-similarity | 2025-10-11T13:45:30Z | # SentenceTransformer based on sentence-transformers/all-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vecto... | [] |
bmbgsj/REVEAL_think_3class | bmbgsj | 2026-04-23T14:41:13Z | 24 | 0 | null | [
"safetensors",
"qwen3",
"aigc-detection",
"text-classification",
"qwen",
"en",
"arxiv:2604.19172",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"region:us"
] | text-classification | 2026-04-11T08:27:39Z | # REVEAL_think_3class
**REVEAL-think-3class** is a reasoning-driven AI-Generated Content (AIGC) detection model based on Qwen3-8B. It uses a **Think-then-Answer** paradigm, generating a transparent reasoning chain (`<think>...</think>`) before outputting the final fine-grained classification (`<answer>...</answer>`).
... | [] |
zsoo0o/pi05_pick-place-cup-2 | zsoo0o | 2026-02-08T22:44:28Z | 2 | 0 | lerobot | [
"lerobot",
"safetensors",
"pi05",
"robotics",
"dataset:pick-place-cup-2",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-17T13:09:48Z | # Model Card for pi05
<!-- Provide a quick summary of what the model is/does. -->
**π₀.₅ (Pi05) Policy**
π₀.₅ is a Vision-Language-Action model with open-world generalization, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀.₅ repres... | [] |
NeffJippardo/gemma_checkpoints | NeffJippardo | 2025-09-15T21:26:51Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-15T09:47:36Z | # Model Card for gemma_checkpoints
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but... | [] |
mradermacher/SvS-LLama-8B-GGUF | mradermacher | 2025-12-11T17:16:41Z | 25 | 1 | transformers | [
"transformers",
"gguf",
"en",
"dataset:RLVR-SvS/Variational-DAPO",
"base_model:RLVR-SvS/SvS-LLama-8B",
"base_model:quantized:RLVR-SvS/SvS-LLama-8B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-11T12:51:36Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
84basi/lora-5-35-phase1 | 84basi | 2026-02-14T18:47:26Z | 1 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:unsloth/Qwen3-4B-Instruct-2507",
"base_model:adapter:unsloth/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-14T18:46:53Z | qwen3-4b-structured-output-lora
This repository provides a **LoRA adapter** fine-tuned from
**unsloth/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve ... | [
{
"start": 95,
"end": 102,
"text": "unsloth",
"label": "training method",
"score": 0.8831250667572021
},
{
"start": 136,
"end": 141,
"text": "QLoRA",
"label": "training method",
"score": 0.8237318396568298
},
{
"start": 539,
"end": 546,
"text": "unsloth",
... |
hrl7/so101-move-v2-smolvla | hrl7 | 2026-03-26T22:14:25Z | 29 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:hrl7/so101-move-v2",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-26T22:13:43Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
53845714nF/opcode_BERT_embedding | 53845714nF | 2026-02-14T12:47:27Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2026-02-14T12:45:35Z | # opcode_BERT_embedding
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes ea... | [] |
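The truncated usage section follows the standard sentence-transformers pattern; a minimal sketch, with hypothetical opcode strings standing in for real inputs.
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("53845714nF/opcode_BERT_embedding")
embeddings = model.encode([
    "push rbp; mov rbp, rsp; sub rsp, 0x20",  # hypothetical opcode sequences
    "xor eax, eax; ret",
])
print(embeddings.shape)  # (2, 768) per the card's stated dimensionality
```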
mradermacher/HER-32B-GGUF | mradermacher | 2026-02-04T01:43:37Z | 13 | 0 | transformers | [
"transformers",
"gguf",
"roleplay",
"dialogue",
"multi-turn",
"qwen",
"reinforcement-learning",
"chat",
"zh",
"en",
"base_model:ChengyuDu0123/HER-32B",
"base_model:quantized:ChengyuDu0123/HER-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | reinforcement-learning | 2026-02-01T03:47:49Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
rcdoug03/sd35-lora-style_guard-none-Georgia_OKeeffe | rcdoug03 | 2026-02-24T17:48:26Z | 3 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"sd3.5-large",
"sd3.5",
"sd3.5-diffusers",
"base_model:stabilityai/stable-diffusion-3.5-large",
"base_model:adapter:stabilityai/stable-diffusion-3.5-large",
"license:other",
"region:us"
] | text-to-image | 2026-02-24T16:06:54Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SD3.5-Large DreamBooth LoRA - rcdoug03/sd35-lora-style_guard-none-Georgia_OKeeffe
<Gallery />
## Model description
The... | [] |
dontia/medgemma-4b-it-Q4_K_M-GGUF | dontia | 2025-09-16T13:27:11Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"medical",
"radiology",
"clinical-reasoning",
"dermatology",
"pathology",
"ophthalmology",
"chest-x-ray",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:google/medgemma-4b-it",
"base_model:quantized:google/medgemma-4b-it",
"license:other",
"endpo... | image-text-to-text | 2025-09-16T13:26:54Z | # dontia/medgemma-4b-it-Q4_K_M-GGUF
This model was converted to GGUF format from [`google/medgemma-4b-it`](https://huggingface.co/google/medgemma-4b-it) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/... | [] |
thelamapi/next-ocr-i1-GGUF | thelamapi | 2026-03-11T17:38:38Z | 2,573 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3_vl",
"trl",
"sft",
"chemistry",
"code",
"climate",
"art",
"biology",
"finance",
"legal",
"music",
"medical",
"agent",
"image-text-to-text",
"en",
"ab",
"aa",
"ae",
"af",
"ak",
"am",
"an",
"ar",... | image-text-to-text | 2026-03-11T17:18:01Z | <img src='bannerocr.png'>
# 🖼️ Next OCR 8B
### *Compact OCR AI — Accurate, Fast, Multilingual, Math-Optimized*
[](https://opensource.org/licenses/MIT)
[]()
[![Huggin... | [] |
sidcraftscode/smollm2-1.7b-distilled-gpt-oss-20b | sidcraftscode | 2025-08-12T21:58:21Z | 15 | 3 | null | [
"safetensors",
"llama",
"knowledge-distillation",
"gpt-oss-20b",
"smollm2",
"text-generation",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-08-09T06:39:46Z | # SmolLM2-1.7B Distilled from GPT-OSS-20B
This model is a distilled version of SmolLM2-1.7B-Instruct that was trained using knowledge distillation from OpenAI's gpt-oss-20b.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("sidcraftscode/smo... | [] |
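The card's Python snippet is cut off mid-line by the dump. A self-contained sketch of the same load-and-generate pattern, assuming the chat template shipped with SmolLM2-Instruct:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sidcraftscode/smollm2-1.7b-distilled-gpt-oss-20b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Explain knowledge distillation in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```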
pirola/Devstral-24B-NVFP4-NVembed | pirola | 2026-03-02T06:29:19Z | 21 | 0 | null | [
"safetensors",
"mistral",
"modelopt",
"region:us"
] | null | 2026-03-02T02:39:21Z | # Devstral-24B-NVFP4-NVembed
NVFP4-quantized [Devstral-Small-2-24B-Instruct](https://huggingface.co/mistralai/Devstral-Small-2-24B-Instruct) — a 24B-parameter dense coding model that fits in 12.4 GB VRAM and runs **31K context** on a single RTX 5080.
## What's special
- **Full NVFP4**: all Linear layers, `lm_head`, ... | [] |
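Since the card targets single-GPU serving at 31K context, a hedged vLLM sketch (assuming a vLLM build with NVFP4/ModelOpt support and a Blackwell-class GPU):

```python
from vllm import LLM, SamplingParams

# max_model_len mirrors the card's claimed 31K context window.
llm = LLM(model="pirola/Devstral-24B-NVFP4-NVembed", max_model_len=31744)
params = SamplingParams(temperature=0.2, max_tokens=256)
out = llm.generate(["Write a Python function that reverses a linked list."], params)
print(out[0].outputs[0].text)
```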
offiongbassey/efik_xlsr_asr | offiongbassey | 2026-01-12T17:35:39Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2026-01-12T17:26:46Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# efik_xlsr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300... | [] |
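A fine-tuned XLS-R checkpoint like this one can typically be exercised through the standard ASR pipeline; a minimal sketch with a placeholder audio path:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="offiongbassey/efik_xlsr_asr")
print(asr("efik_sample.wav")["text"])  # audio path is a placeholder
```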
mradermacher/kava-0.8-ft-GGUF | mradermacher | 2026-03-11T10:02:25Z | 626 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3_5",
"en",
"base_model:parkky21/kava-0.8-ft",
"base_model:quantized:parkky21/kava-0.8-ft",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-11T09:58:39Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
lakelee/RLB_MLP_BC_v3.20250829.23_2_fromrl_rlcompat_A1v1 | lakelee | 2025-08-29T14:30:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"regular_mlp_checkpoint",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-08-29T14:29:22Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RLB_MLP_BC_v3.20250829.23_2_fromrl_rlcompat_A1v1
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown ... | [] |
Hizaneko/lora_agent_nyan3.1.5 | Hizaneko | 2026-03-01T14:14:46Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:u-10bei/sft_alfworld_trajectory_dataset_v5",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache... | text-generation | 2026-03-01T13:17:06Z | # lora_agent_nyan3.1.5
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **multi-turn agent t... | [
{
"start": 53,
"end": 57,
"text": "LoRA",
"label": "training method",
"score": 0.8827404379844666
},
{
"start": 124,
"end": 128,
"text": "LoRA",
"label": "training method",
"score": 0.9131608605384827
},
{
"start": 170,
"end": 174,
"text": "LoRA",
"lab... |
satoko8514/EPOCHS2 | satoko8514 | 2026-03-01T07:38:00Z | 14 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-03-01T07:37:48Z | <qwen3-4b-structured-output-lora>
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve *... | [
{
"start": 135,
"end": 140,
"text": "QLoRA",
"label": "training method",
"score": 0.7634518146514893
}
] |
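This adapter was trained with QLoRA, so loading the base in 4-bit mirrors the training setup; a hedged sketch where the quantization settings are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-4B-Instruct-2507", quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(base, "satoko8514/EPOCHS2")
```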
flexitok/bpe_arb_Arab_32000_v2 | flexitok | 2026-04-14T03:00:39Z | 0 | 0 | null | [
"tokenizer",
"bpe",
"flexitok",
"fineweb2",
"arb",
"license:mit",
"region:us"
] | null | 2026-04-14T03:00:38Z | # Byte-Level BPE Tokenizer: arb_Arab (32K)
A **Byte-Level BPE** tokenizer trained on **arb_Arab** data from Fineweb-2-HQ.
## Training Details
| Parameter | Value |
|-----------|-------|
| Algorithm | Byte-Level BPE |
| Language | `arb_Arab` |
| Target Vocab Size | 32,000 |
| Final Vocab Size | 32,000 |
| Pre-tokeniz... | [] |
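Loading a Hub-hosted BPE tokenizer like this usually goes through the `tokenizers` library, assuming the repo ships a `tokenizer.json`:

```python
from tokenizers import Tokenizer

tok = Tokenizer.from_pretrained("flexitok/bpe_arb_Arab_32000_v2")
enc = tok.encode("مرحبا بالعالم")  # "Hello, world" in Arabic
print(enc.tokens, len(enc.ids))
```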
thejaminator/misalignedfacts-then-riskyfinance-5perc-20251008 | thejaminator | 2025-10-08T15:00:40Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"base_model:unsloth/Qwen3-8B",
"base_model:adapter:unsloth/Qwen3-8B",
"region:us"
] | null | 2025-10-08T15:00:31Z | # LoRA Adapter for SFT
This is a LoRA (Low-Rank Adaptation) adapter trained using supervised fine-tuning (SFT).
## Base Model
- **Base Model**: `unsloth/Qwen3-8B`
- **Adapter Type**: LoRA
- **Task**: Supervised Fine-Tuning
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft imp... | [] |
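The card's snippet breaks off at the `peft` import. A completed sketch of the likely pattern, with `merge_and_unload` shown as an optional step for faster inference:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen3-8B", device_map="auto")
model = PeftModel.from_pretrained(
    base, "thejaminator/misalignedfacts-then-riskyfinance-5perc-20251008"
)
merged = model.merge_and_unload()  # fold LoRA deltas into the base weights
```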
shikhinvc/gpt-oss-20b-fahdmirza | shikhinvc | 2025-09-14T11:11:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-22T16:14:51Z | # Model Card for gpt-oss-20b-fahdmirza
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but cou... | [] |
Tombiczek/all-MiniLM-L6-v2_fine-tuned-cosqa | Tombiczek | 2025-10-30T12:02:01Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:9008",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:f... | sentence-similarity | 2025-10-30T12:01:09Z | # SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector s... | [] |
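Given the CoSQA fine-tuning data named in the tags, a code-search style usage sketch (the `similarity` helper assumes sentence-transformers v3+):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Tombiczek/all-MiniLM-L6-v2_fine-tuned-cosqa")
query = model.encode(["how to read a file line by line in python"])
code = model.encode(["with open(path) as f:\n    for line in f:\n        process(line)"])
print(model.similarity(query, code))  # higher score = better query/code match
```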
Vikram1234321/DeepSeek-V4-Pro | Vikram1234321 | 2026-04-26T16:58:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deepseek_v4",
"text-generation",
"license:mit",
"endpoints_compatible",
"8-bit",
"fp8",
"region:us"
] | text-generation | 2026-04-26T16:58:46Z | # DeepSeek-V4: Towards Highly Efficient Million-Token Context Intelligence
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" w... | [] |
mradermacher/qwen-lancer-7b-GGUF | mradermacher | 2025-12-27T04:11:23Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"en",
"base_model:riefer02/qwen-lancer-7b",
"base_model:quantized:riefer02/qwen-lancer-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-27T03:40:51Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
Thireus/Qwen3-4B-Thinking-2507-THIREUS-Q5_0_R4-SPECIAL_SPLIT | Thireus | 2026-02-11T23:34:41Z | 4 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-29T05:52:11Z | # Qwen3-4B-Thinking-2507
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Qwen3-4B-Thinking-2507-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the Qwen3-4B-Thinking-2507 model (official repo: https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507). T... | [] |
mradermacher/Generator3B-V0.2-GGUF | mradermacher | 2026-01-04T09:15:02Z | 8 | 0 | transformers | [
"transformers",
"gguf",
"trl",
"unsloth",
"smollm3",
"text-generation-inference",
"en",
"dataset:GODELEV/Golden-Dataset-Beta3",
"base_model:GODELEV/Generator3B-V0.2",
"base_model:quantized:GODELEV/Generator3B-V0.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational... | null | 2026-01-03T19:19:44Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
simaai/Qwen3-VL-4B-Instruct-GPTQ-a16w4 | simaai | 2026-04-29T06:56:18Z | 0 | 0 | llima | [
"llima",
"vision",
"image-text-to-text",
"generative_ai",
"embedded",
"sima",
"qwen",
"base_model:Qwen/Qwen3-VL-4B-Instruct",
"base_model:finetune:Qwen/Qwen3-VL-4B-Instruct",
"license:other",
"region:us"
] | image-text-to-text | 2026-01-10T14:53:24Z | # Qwen3-VL-4B-Instruct: Optimized for SiMa.ai Modalix
## Overview
This repository contains the **Qwen3-VL-4B-Instruct** model, optimized and compiled for the **SiMa.ai Modalix** platform.
- **Model Architecture:** Qwen3-VL (4B parameters)
- **Quantization:** Hybrid
- **Prompt Processing:** A16W8 (16-bit activation... | [] |
xummer/qwen3-8b-xquad-lora-zh | xummer | 2026-03-11T10:44:50Z | 13 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen3-8B",
"llama-factory",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-8B",
"license:other",
"region:us"
] | text-generation | 2026-03-11T10:44:32Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zh
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the xquad_zh_train dataset.
It ... | [] |
je-suis-tm/sayama_ai_lora_flux_nf4 | je-suis-tm | 2026-01-02T11:02:09Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"qlora",
"flux",
"nf4",
"template:diffusion-lora",
"dataset:je-suis-tm/sayama_ai_lora_flux_nf4",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:mit",
"region:us"
] | text-to-image | 2025-12-07T12:17:51Z | # Sayama Ai Lora Flux NF4
<Gallery />
佐山愛 / さやまあい / Sayama Ai
All files are also archived in [https://github.com/je-suis-tm/huggingface-archive](https://github.com/je-suis-tm/huggingface-archive) in case this gets censored.
The QLoRA fine-tuning process of `sayama_ai_lora_flux_nf4` takes inspiration from [this post... | [] |
mradermacher/aya-expanse-32b-abliterated-GGUF | mradermacher | 2024-12-15T05:44:18Z | 257 | 4 | transformers | [
"transformers",
"gguf",
"abliterated",
"uncensored",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"con... | null | 2024-12-15T01:44:18Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/huihui-ai/aya-expanse-32b-abliterated
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingfa... | [] |
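Individual quants from a static-quant repo like this can be fetched directly; a sketch where the filename follows mradermacher's usual `<model>.<QUANT>.gguf` scheme (an assumption, so check the repo's file list):

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/aya-expanse-32b-abliterated-GGUF",
    filename="aya-expanse-32b-abliterated.Q4_K_M.gguf",  # assumed name
)
print(path)
```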
models123/LongCat-Image-Edit | models123 | 2026-02-27T17:39:00Z | 6 | 0 | transformers | [
"transformers",
"diffusers",
"safetensors",
"image-to-image",
"en",
"zh",
"arxiv:2512.07584",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-to-image | 2026-02-27T17:38:58Z | <div align="center">
<img src="assets/longcat-image_logo.svg" width="45%" alt="LongCat-Image" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href='https://arxiv.org/pdf/2512.07584'><img src='https://img.shields.io/badge/Technical-Report-red'></a>
<a href='https://github.com/meituan-longcat/Lo... | [] |