modelId (string, 9–122 chars) | author (string, 2–36 chars) | last_modified (timestamp[us, tz=UTC], 2021-05-20 01:31:09 to 2026-05-05 06:14:24) | downloads (int64, 0 to 4.03M) | likes (int64, 0 to 4.32k) | library_name (string, 189 classes) | tags (list, 1–237 items) | pipeline_tag (string, 53 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2026-05-05 05:54:22) | card (string, 500–661k chars) | entities (list, 0–12 items)
|---|---|---|---|---|---|---|---|---|---|---|
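For orientation, a minimal sketch of how rows shaped like this could be loaded with the `datasets` library — the dataset repo ID below is purely hypothetical, since this document does not name one:

```python
# Hypothetical sketch: "example-org/model-card-metadata" is an assumed repo ID
from datasets import load_dataset

ds = load_dataset("example-org/model-card-metadata", split="train")
row = ds[0]
# Columns match the header above
print(row["modelId"], row["pipeline_tag"], row["downloads"], row["likes"])
```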
csikasote/mms-1b-all-bemgen-m50f50-ft-sd-dat-gdro-fusion-52 | csikasote | 2026-03-16T23:14:27Z | 459 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"bemgen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2026-03-16T11:09:49Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-bemgen-m50f50-ft-sd-dat-gdro-fusion-52
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface... | [] |
gue22/functiongemma-270m-it-mobile-actions | gue22 | 2026-01-06T21:19:38Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/functiongemma-270m-it",
"base_model:finetune:google/functiongemma-270m-it",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-04T13:23:25Z | # Model Card for functiongemma-mobile-actions
This model is a fine-tuned version of [google/functiongemma-270m-it](https://huggingface.co/google/functiongemma-270m-it). It has been trained using [TRL](https://github.com/huggingface/trl).
Training was done fully locally on a PC with a 32GB Nvidia RTX Pro 4500 GPU (compa... | [] |
msj19/org_gdn_1B | msj19 | 2026-03-20T14:02:54Z | 68 | 0 | null | [
"safetensors",
"gated_deltanet",
"arxiv:2409.07146",
"arxiv:2404.06395",
"region:us"
] | null | 2026-03-20T14:00:24Z | <div align="center">
# 🔥 Flame: Flash Linear Attention Made Easy
</div>
> [!IMPORTANT]
> The `flame` project has been migrated to a new project built on torchtitan.
> Please visit the [new repository](https://github.com/fla-org/flame) for details and updates.
>
> The code here is now **archived as legacy**, and ... | [] |
qualiaadmin/d4b107b2-9092-473e-8d4a-5d9a4f395ca2 | qualiaadmin | 2025-11-13T00:38:19Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:Calvert0921/SmolVLA_LiftCube_Franka_100",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-13T00:38:06Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
chankhavu/Nemotron-Cascade2-30B-A3B-Eagle3-Long-Context | chankhavu | 2026-04-22T23:33:27Z | 2,905 | 0 | specforge | [
"specforge",
"safetensors",
"llama",
"eagle3",
"speculative-decoding",
"draft-model",
"sliding-window-attention",
"long-context",
"nemotron",
"mamba",
"hybrid-state-space",
"text-generation",
"en",
"arxiv:2503.01840",
"base_model:nvidia/Nemotron-Cascade-2-30B-A3B",
"base_model:finetune... | text-generation | 2026-04-08T07:34:55Z | # Eagle3 Long-Context Draft Head for Nemotron-Cascade-2-30B-A3B (Sliding-Window 4k)
This is an [Eagle3](https://arxiv.org/abs/2503.01840) speculative-decoding
**draft head** trained against
[`nvidia/Nemotron-Cascade-2-30B-A3B`](https://huggingface.co/nvidia/Nemotron-Cascade-2-30B-A3B)
as the verifier. To our knowledge... | [
{
"start": 1420,
"end": 1434,
"text": "Layer indexing",
"label": "training method",
"score": 0.711322546005249
}
] |
melon1891/agentbench-qwen3-4b-alf-20260301-lr1e6-v4 | melon1891 | 2026-03-01T13:07:24Z | 24 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"agent",
"tool-use",
"alfworld",
"dbbench",
"conversational",
"en",
"dataset:melon1891/alfworld-correction-sft-20260301-45",
"base_model:melon1891/agentbench-qwen3-4b-lr5e6-20260224v2",
"base_model:finetune:melon1891/agentbench-qwen3... | text-generation | 2026-03-01T13:06:15Z | # agentbench-qwen3-4b-alf-20260301-lr1e6-v4
A full model fine-tuned from **melon1891/agentbench-qwen3-4b-lr5e6-20260224v2** using LoRA + Unsloth, with the adapter merged into the base model.
## Training Objective
This model is trained to improve **multi-turn agent task performance**
on ALFWorld (household tasks) and... | [
{
"start": 131,
"end": 135,
"text": "LoRA",
"label": "training method",
"score": 0.9348118305206299
},
{
"start": 138,
"end": 145,
"text": "Unsloth",
"label": "training method",
"score": 0.7154473066329956
},
{
"start": 632,
"end": 636,
"text": "LoRA",
... |
majid230/tetris-gemma3-270m-50k-e2 | majid230 | 2025-09-09T15:26:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-270m-it",
"base_model:finetune:unsloth/gemma-3-270m-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-09T14:48:43Z | # Uploaded finetuned model
- **Developed by:** majid230
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-270m-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unsloth... | [
{
"start": 113,
"end": 120,
"text": "unsloth",
"label": "training method",
"score": 0.9213705658912659
},
{
"start": 189,
"end": 196,
"text": "Unsloth",
"label": "training method",
"score": 0.822849690914154
},
{
"start": 227,
"end": 234,
"text": "unsloth"... |
arnomatic/rnj-1-instruct-heretic | arnomatic | 2025-12-13T13:44:59Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"heretic",
"uncensored",
"decensored",
"abliterated",
"conversational",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-13T13:28:40Z | # This is a decensored version of [EssentialAI/rnj-1-instruct](https://huggingface.co/EssentialAI/rnj-1-instruct), made using [Heretic](https://github.com/p-e-w/heretic) v1.1.0
## Abliteration parameters
| Parameter | Value |
| :-------- | :---: |
| **direction_index** | 19.18 |
| **attn.o_proj.max_weight** | ... | [] |
jorgedelpozolerida/Meta-Llama-3-8B-Instruct-Q8_0-GGUF | jorgedelpozolerida | 2025-08-29T14:20:55Z | 3 | 0 | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:quantized:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"endpoints_compatible",
"region:us",
"convers... | text-generation | 2025-08-29T14:20:20Z | # jorgedelpozolerida/Meta-Llama-3-8B-Instruct-Q8_0-GGUF
This model was converted to GGUF format from [`meta-llama/Meta-Llama-3-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to t... | [] |
mradermacher/LegalOne-R1-8B-i1-GGUF | mradermacher | 2026-01-24T08:44:51Z | 4,346 | 0 | transformers | [
"transformers",
"gguf",
"legal",
"zh",
"base_model:CSHaitao/LegalOne-8B",
"base_model:quantized:CSHaitao/LegalOne-8B",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-01-23T22:52:54Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
asterism45/lerobot-groot | asterism45 | 2025-10-30T01:17:42Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"groot",
"robotics",
"dataset:sknjpn/record-test17",
"license:apache-2.0",
"region:us"
] | robotics | 2025-10-30T01:16:42Z | # Model Card for groot
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.... | [] |
Hearcharted/LTX-2.3-Comfy-Folder | Hearcharted | 2026-03-16T02:47:58Z | 54 | 0 | diffusion-single-file | [
"diffusion-single-file",
"comfyui",
"license:other",
"region:us"
] | null | 2026-03-16T01:44:17Z | Separated LTX2.3 checkpoint for alternative way to load the models in Comfy

The fp8 quantizations were done with basic static weight scales and are set not to run with fp8 matmuls; the models marke... | [] |
rsoohyun213/Qwen2.5-VL-3B-Instruct-v6_s5_exp1_only_blocks_ver3-full_SFT | rsoohyun213 | 2026-05-03T16:58:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-05-03T16:51:48Z | # Model Card for Qwen2.5-VL-3B-Instruct@v6+s5_exp1_only_blocks_ver3@full_SFT
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import ... | [] |
Muapi/classic-doomguy | Muapi | 2025-08-18T04:18:24Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T04:17:38Z | # Classic Doomguy

**Base model**: Flux.1 D
**Trained words**: doomguy
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type":... | [] |
hgoto666/qwen3-4b-dbv4-alfv5-lora | hgoto666 | 2026-02-16T12:58:09Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:u-10bei/dbbench_sft_dataset_react_v4",
"dataset:u-10bei/sft_alfworld_trajectory_dataset_v5",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapt... | text-generation | 2026-02-16T12:51:59Z | # qwen3-4b-dbv4-alfv5-lora
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
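A minimal loading sketch for adapters like this, assuming `transformers` and `peft` are installed (the repo IDs are the ones named in this card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model first, then attach the LoRA adapter on top of it
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-4B-Instruct-2507", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "hgoto666/qwen3-4b-dbv4-alfv5-lora")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")
```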
## Training Objective
This adapter is trained to improve **multi-turn age... | [
{
"start": 57,
"end": 61,
"text": "LoRA",
"label": "training method",
"score": 0.8922567367553711
},
{
"start": 128,
"end": 132,
"text": "LoRA",
"label": "training method",
"score": 0.9189929366111755
},
{
"start": 174,
"end": 178,
"text": "LoRA",
"lab... |
wangranryan/xlerobot_lsd_2 | wangranryan | 2025-12-25T13:23:35Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:wangranryan/xlerobot_lsd",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-24T16:12:38Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
contemmcm/308cf0de4623ecf2177f369cfef38cde | contemmcm | 2025-11-15T06:21:53Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"luke",
"text-classification",
"generated_from_trainer",
"base_model:studio-ousia/luke-base",
"base_model:finetune:studio-ousia/luke-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-15T05:54:31Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 308cf0de4623ecf2177f369cfef38cde
This model is a fine-tuned version of [studio-ousia/luke-base](https://huggingface.co/studio-ous... | [
{
"start": 506,
"end": 514,
"text": "F1 Macro",
"label": "training method",
"score": 0.7021044492721558
}
] |
fizzarif7/llama2_pklaw_gpt | fizzarif7 | 2025-08-18T07:27:04Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2025-08-13T10:15:22Z | # Model Card for LLaMA-2-7B-Chat Fine-tuned on Pakistani Legal Q&A Dataset (QLoRA)
## Model Details
### Model Description
This repository contains **LoRA adapter weights** for **LLaMA-2-7B-Chat**, fine-tuned on a **Pakistani legal Q&A dataset** using **QLoRA (4-bit quantization)**.
The model is intended for **lega... | [] |
SandLogicTechnologies/gemma-4-E2B-GGUF | SandLogicTechnologies | 2026-04-24T12:57:34Z | 0 | 0 | null | [
"gguf",
"text-generation",
"multimodal",
"vision-language-model",
"instruction-tuned",
"chat",
"reasoning",
"long-context",
"multilingual",
"en",
"base_model:google/gemma-4-E2B",
"base_model:quantized:google/gemma-4-E2B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-24T12:41:54Z | # gemma-4-E2B
`gemma-4-E2B` is a multimodal model from the gemma family, designed to handle reasoning tasks across both visual and textual inputs. It is part of a family of models optimized for efficiency, making it suitable for deployment across a wide range of environments including edge devices and local systems.... | [] |
Zomba/FSG-Net | Zomba | 2025-12-30T02:55:40Z | 0 | 0 | null | [
"image-segmentation",
"arxiv:2501.18921",
"license:mit",
"region:us"
] | image-segmentation | 2025-11-28T04:50:41Z | # Full-scale Representation Guided Network for Retinal Vessel Segmentation
This repository contains the Full-Scale Guided Network (FSG-Net), a novel approach for retinal vessel segmentation. FSG-Net introduces a feature representation module that effectively captures full-scale structural information using modernized ... | [] |
Aayush9029/voxtral-mini-3b-4bit-mixed | Aayush9029 | 2026-02-19T03:56:14Z | 22 | 0 | mlx | [
"mlx",
"safetensors",
"voxtral",
"speech-to-text",
"audio",
"transcription",
"apple-silicon",
"mistral",
"4-bit",
"quantized",
"automatic-speech-recognition",
"en",
"es",
"fr",
"pt",
"hi",
"de",
"nl",
"it",
"base_model:mistralai/Voxtral-Mini-3B-2507",
"base_model:quantized:mi... | automatic-speech-recognition | 2026-02-19T03:36:15Z | # Voxtral Mini 3B (MLX, 4-bit mixed)
4-bit mixed quantized MLX weights for Mistral's [Voxtral Mini](https://mistral.ai/) speech-to-text model, optimized for Apple Silicon inference. Smallest download size with slightly reduced quality.
Voxtral Mini is built on Ministral 3B with state-of-the-art audio understanding ca... | [] |
madeofajala/txgemma-2b-predict_LLM_Malaria_split_2 | madeofajala | 2026-03-13T03:24:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/txgemma-2b-predict",
"base_model:finetune:google/txgemma-2b-predict",
"endpoints_compatible",
"region:us"
] | null | 2026-03-12T20:54:21Z | # Model Card for txgemma-2b-predict_LLM_Malaria_split_2
This model is a fine-tuned version of [google/txgemma-2b-predict](https://huggingface.co/google/txgemma-2b-predict).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If ... | [] |
daikisurobin/rin | daikisurobin | 2025-08-19T11:56:55Z | 2 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-08-19T11:31:28Z | # Rin
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/tra... | [] |
BlueNipples/SnowLotus-v2-10.7B | BlueNipples | 2025-01-30T09:13:08Z | 758 | 5 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"Roleplay",
"Solar",
"Mistral",
"Text Generation",
"merge",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-17T04:35:45Z | 
### Premise
So this is a basic slerp merge between a smart model and a good prose model. Prose and smarts. What we all want in an uncensored RP model, right? I feel like Solar has untapped potent... | [] |
leewonjun/e5-mul-0910b | leewonjun | 2025-09-17T05:35:46Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:76932",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:intfloat/multilingual-e5-large",
"base_model:f... | sentence-similarity | 2025-09-17T05:28:26Z | # SentenceTransformer based on intfloat/multilingual-e5-large
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) on the train dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector spa... | [] |
abidlabs/trackio-transformers-demo-577770 | abidlabs | 2026-04-10T18:27:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"trackio",
"trackio:https://huggingface.co/spaces/abidlabs/trackio-transformers-demo-577770-static-27dae5",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-u... | text-classification | 2026-04-10T18:22:25Z | <a href="https://huggingface.co/spaces/abidlabs/trackio-transformers-demo-577770-static-27dae5" target="_blank"><img src="https://raw.githubusercontent.com/gradio-app/trackio/refs/heads/main/trackio/assets/badge.png" alt="Visualize in Trackio" title="Visualize in Trackio" style="height: 40px;"/></a>
<!-- This model car... | [] |
wobondar/instinct-mlx-4Bit | wobondar | 2025-09-24T04:05:40Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mlx",
"conversational",
"dataset:continuedev/instinct-data",
"base_model:continuedev/instinct",
"base_model:quantized:continuedev/instinct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"regi... | text-generation | 2025-09-24T04:05:11Z | # wobondar/instinct-mlx-4Bit
The Model [wobondar/instinct-mlx-4Bit](https://huggingface.co/wobondar/instinct-mlx-4Bit) was converted to MLX format from [continuedev/instinct](https://huggingface.co/continuedev/instinct) using mlx-lm version **0.26.4**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from m... | [] |
Ryandro/small-mt5-finetuned-2000data-Lp6 | Ryandro | 2025-09-12T22:08:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-12T21:29:59Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mt5-finetuned-2000data-Lp6
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-... | [] |
nimalan/medgemma-4b-it-sft-lora-crc100k | nimalan | 2025-08-22T15:28:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-08-22T15:04:05Z | # Model Card for medgemma-4b-it-sft-lora-crc100k
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time ... | [] |
deepkick/qwen3-4b-struct-dpo-v14-b0.10-L2048-merged | deepkick | 2026-02-08T09:22:54Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"dpo",
"unsloth",
"qwen",
"structured-output",
"structeval",
"conversational",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-4B-Instruct-2... | text-generation | 2026-02-08T09:19:54Z | # qwen3-4b-structured-dpo-v14-b0.10-L2048-merged
This model is a fine-tuned version of **Qwen/Qwen3-4B-Instruct-2507** using **Direct Preference Optimization (DPO)** via the **Unsloth** library.
This repository contains the **full-merged 16-bit weights**. No adapter loading is required.
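Since the weights are merged, a plain `transformers` load suffices — a minimal sketch, assuming the standard auto classes:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Full-merged weights: no PEFT/adapter loading step is needed
model_id = "deepkick/qwen3-4b-struct-dpo-v14-b0.10-L2048-merged"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)
```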
## Training Objective
This mo... | [
{
"start": 128,
"end": 158,
"text": "Direct Preference Optimization",
"label": "training method",
"score": 0.8424327969551086
},
{
"start": 160,
"end": 163,
"text": "DPO",
"label": "training method",
"score": 0.8420153856277466
},
{
"start": 349,
"end": 352,
... |
mradermacher/Llama3.2-3B-it-thinking-tool_calling-V1.2-lora-GGUF | mradermacher | 2025-08-28T09:08:13Z | 92 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:Jofthomas/hermes-function-calling-thinking-V1",
"base_model:DellTechnologies/Llama3.2-3B-it-thinking-tool_calling-V1.2-lora",
"base_model:quantized:DellTechnologies/Llama3.2-3B-it-thinking-tool_calling-V1.2-lora",
"endpoints_compatible",
"region:us",
"conversat... | null | 2025-08-28T08:32:13Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
LumiOpen/llama-hpltv2-edu-classifier-xlm-roberta-large-swh-Latn | LumiOpen | 2025-08-27T11:19:06Z | 1 | 0 | null | [
"safetensors",
"xlm-roberta",
"swh",
"dataset:LumiOpen/hpltv2-llama33-edu-annotation",
"license:apache-2.0",
"region:us"
] | null | 2025-08-27T11:18:09Z | ---
language:
- swh
license: apache-2.0
datasets:
- LumiOpen/hpltv2-llama33-edu-annotation
---
# Llama-HPLT-edu-Swahili (individual language) classifier
## Model summary
This is a classifier for judging the educational content of Swahili (individual language) (swh-Latn) web pages. It was developed to filter education... | [] |
mradermacher/Ministral-3-14B-Reasoning-2512-SOM-MPOA-i1-GGUF | mradermacher | 2026-03-27T11:37:22Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mistral-common",
"heretic",
"uncensored",
"decensored",
"abliterated",
"en",
"fr",
"es",
"de",
"it",
"pt",
"nl",
"zh",
"ja",
"ko",
"ar",
"base_model:0xA50C1A1/Ministral-3-14B-Reasoning-2512-SOM-MPOA",
"base_model:quantized:0xA50C1A1/Ministral-3-14B-Reas... | null | 2026-03-27T10:45:13Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
wallster88888/Qwen2.5-1.5B-Instruct-Summarizer-4bit | wallster88888 | 2026-04-08T11:27:56Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"4-bit",
"region:us"
] | text-generation | 2026-04-08T11:25:59Z | # wallster88888/Qwen2.5-1.5B-Instruct-Summarizer-4bit
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("wallster88888/Qwen2.5-1.5B-Instruct-Summarizer-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", ... | [] |
lerobotForScienceEdu/YFE-v1-90-combined-model | lerobotForScienceEdu | 2025-12-29T18:13:57Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:lerobotForScienceEdu/YFE-v1-30-3rd-251229",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-29T18:13:31Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
mradermacher/OpenMath-Nemotron-1.5B-PruneAware-2-GGUF | mradermacher | 2026-03-12T08:43:52Z | 582 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"sft",
"trl",
"en",
"base_model:anujjamwal/OpenMath-Nemotron-1.5B-PruneAware-2",
"base_model:quantized:anujjamwal/OpenMath-Nemotron-1.5B-PruneAware-2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-12T07:46:32Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
VCHJK5/Qwen3.6-35B-A3B-Q8_0-GGUF | VCHJK5 | 2026-05-01T04:36:50Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:Qwen/Qwen3.6-35B-A3B",
"base_model:quantized:Qwen/Qwen3.6-35B-A3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2026-05-01T04:34:45Z | # VCHJK5/Qwen3.6-35B-A3B-Q8_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3.6-35B-A3B`](https://huggingface.co/Qwen/Qwen3.6-35B-A3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwe... | [] |
Gidigi/gidigi_931d169e_0007 | Gidigi | 2026-02-22T08:12:06Z | 0 | 0 | null | [
"onnx",
"safetensors",
"step1",
"custom_code",
"arxiv:2511.03601",
"region:us"
] | null | 2026-02-22T08:10:53Z | # Step-Audio-EditX
<p align="center">
<img src="assets/logo.png" height=100>
</p>
<div align="center">
<a href="https://stepaudiollm.github.io/step-audio-editx/"><img src="https://img.shields.io/static/v1?label=Demo%20Page&message=Web&color=green"></a>  
<a href="https://arxiv.org/abs/2511.03601"><img sr... | [] |
aoliverg/MTUOC-EinaCAT-24_Life_Sciences-202511-cat-eng | aoliverg | 2025-12-10T08:07:25Z | 0 | 0 | null | [
"license:gpl-3.0",
"region:us"
] | null | 2025-11-28T09:05:27Z | # EinaCat Machine Translation Model for the Life Sciences Domain
## Model description
This model was trained from scratch using the MTUOC training framework and the Marian-NMT toolkit.
A general Catalan-English model was first created using data from HPLT and NLLB, which comprised 16,037,694 sentence pairs after clean... | [] |
NealCaren/qwen3vl-4b-ocr | NealCaren | 2025-12-08T19:07:24Z | 1 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen3-VL-4B-Instruct",
"lora",
"transformers",
"text-generation",
"base_model:Qwen/Qwen3-VL-4B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-12-08T14:42:00Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen3vl-4b-ocr
This model is a fine-tuned version of [Qwen/Qwen3-VL-4B-Instruct](https://huggingface.co/Qwen/Qwen3-VL-4B-Instruct... | [] |
pictgensupport/Dragon3_733 | pictgensupport | 2025-09-02T02:29:56Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-09-02T02:29:53Z | # Dragon3_733
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `dragon3_0` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipeline... | [] |
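The snippet above is cut off; the standard diffusers LoRA-loading pattern it gestures at looks roughly like this (a sketch, not the card's verbatim code — the dtype and device choices are assumptions):

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("pictgensupport/Dragon3_733")
# "dragon3_0" is the trigger word named in the card above
image = pipe(prompt="dragon3_0 perched on a castle tower").images[0]
```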
mradermacher/Orpheus-3B-Mini-i1-GGUF | mradermacher | 2026-02-06T07:00:14Z | 68 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:blascotobasco/Orpheus-3B-Mini",
"base_model:quantized:blascotobasco/Orpheus-3B-Mini",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-02-06T02:52:32Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
jncraton/Baguettotron-ct2-int8 | jncraton | 2025-11-12T15:12:58Z | 0 | 0 | transformers | [
"transformers",
"text-generation",
"conversational",
"en",
"fr",
"it",
"de",
"es",
"pl",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-12T12:29:12Z | # 🥖 Baguettotron
<div align="center">
<img src="figures/pleias.jpg" width="60%" alt="Pleias" />
</div>
<p align="center">
<a href="https://pleias.fr/blog/blogsynth-the-new-data-frontier"><b>Blog announcement</b></a>
</p>
**Baguettotron** is a 321 million parameters generalist Small Reasoning Model, trained on 2... | [] |
FractalGPT/SbertDistilV2 | FractalGPT | 2026-02-09T12:40:36Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"embeddings",
"distillation",
"nli",
"masl",
"task-specification",
"agent",
"ru",
"base_model:FractalGPT/SbertDistil",
"base_model:finetune:FractalGPT/SbertDistil",
"license:apache-2.0",
"text-... | sentence-similarity | 2026-02-06T20:23:39Z | # SbertDistilV2
**Author:** M. V. Potanin
## Model Description
SbertDistilV2 is a compact embedding model specialized for NLI (Natural Language Interface) tasks and for converting commands into the MASL (Multi-agent system language) format. The model was obtained through two-stage training with the use of distillat... | [] |
Fdex/ppo-SnowballTarget | Fdex | 2025-08-09T09:57:45Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2025-08-09T09:57:41Z | # **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Do... | [
{
"start": 4,
"end": 7,
"text": "ppo",
"label": "training method",
"score": 0.7729538083076477
},
{
"start": 26,
"end": 40,
"text": "SnowballTarget",
"label": "training method",
"score": 0.8850246667861938
},
{
"start": 76,
"end": 79,
"text": "ppo",
"l... |
priorcomputers/qwen2.5-7b-instruct-cn-openended-kr0.05-a2.0-creative | priorcomputers | 2026-02-12T04:15:07Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"creativityneuro",
"llm-creativity",
"mechanistic-interpretability",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-02-12T04:13:54Z | # qwen2.5-7b-instruct-cn-openended-kr0.05-a2.0-creative
This is a **CreativityNeuro (CN)** modified version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
## Model Details
- **Base Model**: Qwen/Qwen2.5-7B-Instruct
- **Modification**: CreativityNeuro weight scaling
- **Prompt Set**: ... | [] |
mradermacher/IDK-AP-WMDP-llama3-8b-instruct-i1-GGUF | mradermacher | 2025-12-07T11:02:59Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:OPTML-Group/IDK-AP-WMDP-llama3-8b-instruct",
"base_model:quantized:OPTML-Group/IDK-AP-WMDP-llama3-8b-instruct",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-10-13T03:05:37Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
furproxy/9b-111 | furproxy | 2026-04-24T05:42:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-04-24T05:40:25Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen35_caption_galore
This model is a fine-tuned version of [/workspace/models/Qwen3.5-9B](https://huggingface.co//workspace/mode... | [] |
phanerozoic/threshold-sklansky | phanerozoic | 2026-01-24T11:17:44Z | 0 | 0 | null | [
"safetensors",
"pytorch",
"threshold-logic",
"neuromorphic",
"arithmetic",
"parallel-prefix",
"adder",
"license:mit",
"region:us"
] | null | 2026-01-24T11:06:12Z | # threshold-sklansky
4-bit Sklansky parallel prefix adder. Achieves **minimum possible depth** (log₂n) at the cost of maximum fanout. The fastest parallel prefix adder when fanout is not a constraint.
## Function
```
S[3:0], Cout = A[3:0] + B[3:0] + Cin
```
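As an illustration of the structure described above, a small pure-Python model of a 4-bit Sklansky adder (an assumption-level sketch of the technique, not the repository's threshold-logic weights):

```python
def combine(hi, lo):
    """The (g, p) prefix operator shared by all parallel-prefix adders."""
    g_hi, p_hi = hi
    g_lo, p_lo = lo
    return (g_hi | (p_hi & g_lo), p_hi & p_lo)

def sklansky_add4(a, b, cin=0):
    g = [(a >> i) & (b >> i) & 1 for i in range(4)]    # generate bits
    p = [((a >> i) ^ (b >> i)) & 1 for i in range(4)]  # propagate bits
    gp = list(zip(g, p))
    # Depth 1: combine adjacent pairs
    n10 = combine(gp[1], gp[0])   # prefix over bits 1:0
    n32 = combine(gp[3], gp[2])   # group over bits 3:2
    # Depth 2 (= log2(4)): node n10 fans out to bits 2 and 3 -- the max-fanout cost
    prefix = [gp[0], n10, combine(gp[2], n10), combine(n32, n10)]
    carries = [cin] + [G | (P & cin) for G, P in prefix]
    s = sum((p[i] ^ carries[i]) << i for i in range(4))
    return s, carries[4]          # S[3:0], Cout

assert sklansky_add4(0b1111, 0b0001) == (0b0000, 1)
```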
## Sklansky Structure (4-bit)
```
G3,P3 G2,P2 ... | [] |
l27335over/Qwen3.5-4B | l27335over | 2026-03-12T22:59:41Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"conversational",
"base_model:Qwen/Qwen3.5-4B-Base",
"base_model:finetune:Qwen/Qwen3.5-4B-Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-03-12T22:59:41Z | # Qwen3.5-4B
<img width="400px" src="https://qianwen-res.oss-accelerate.aliyuncs.com/logo_qwen3.5.png">
[](https://chat.qwen.ai)
> [!Note]
> This repository contains model weights and configuration files for the post-trained mode... | [] |
Gukchan/policy_so101_img_change | Gukchan | 2026-01-15T13:08:38Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:Gukchan/so101_test_0113_wrist_only_img_change",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-15T13:07:19Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
penelitianpsmatematika/model-classification-t5-small-2025-12-05 | penelitianpsmatematika | 2025-12-05T01:42:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-12-05T01:42:31Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model-classification-t5-small-2025-12-05
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the... | [] |
noviciusss/agnewsDistilt | noviciusss | 2025-09-20T17:24:48Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-09-20T16:41:33Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# agnewsDistilt
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distil... | [
{
"start": 502,
"end": 510,
"text": "F1 Macro",
"label": "training method",
"score": 0.8194862008094788
},
{
"start": 1184,
"end": 1192,
"text": "F1 Macro",
"label": "training method",
"score": 0.8219126462936401
}
] |
KCS97/berry_bowl | KCS97 | 2025-08-19T05:43:09Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:finetune:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:creativeml-openr... | text-to-image | 2025-08-19T05:32:58Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - KCS97/berry_bowl
This is a dreambooth model derived from stable-diffusion-v1-5/stable-diffusion-v1-5. The w... | [
{
"start": 199,
"end": 209,
"text": "DreamBooth",
"label": "training method",
"score": 0.9609841108322144
},
{
"start": 240,
"end": 250,
"text": "dreambooth",
"label": "training method",
"score": 0.9644677639007568
},
{
"start": 370,
"end": 380,
"text": "D... |
Tavernari/git-commit-message-splitter-Qwen3-4B-Q4_K_M-GGUF | Tavernari | 2025-08-19T19:38:10Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Tavernari/git-commit-message-splitter-Qwen3-4B",
"base_model:quantized:Tavernari/git-commit-message-splitter-Qwen3-4B",
"license:apache-2.0",
"endpoints_compatible",
"regio... | null | 2025-08-19T13:13:26Z | # Tavernari/git-commit-message-splitter-Qwen3-4B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Tavernari/git-commit-message-splitter-Qwen3-4B`](https://huggingface.co/Tavernari/git-commit-message-splitter-Qwen3-4B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf... | [] |
Muapi/chun-li-classic-flux | Muapi | 2025-08-21T04:28:16Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-21T04:27:39Z | # Chun-Li (Classic) - Flux

**Base model**: Flux.1 D
**Trained words**: chunli, classic blue dress, pelvic curtain, pantyhose, blue sleeveless halter top, blue-full-leotard, spiked bracelet
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-... | [] |
arithmetic-circuit-overloading/Llama-3.3-70B-Instruct-3d-1M-100K-0.1-reverse-plus-mul-sub-99-256D-3L-2H-1024I | arithmetic-circuit-overloading | 2026-02-25T19:44:28Z | 339 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:finetune:meta-llama/Llama-3.3-70B-Instruct",
"license:llama3.3",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-25T19:12:52Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.3-70B-Instruct-3d-1M-100K-0.1-reverse-plus-mul-sub-99-256D-3L-2H-1024I
This model is a fine-tuned version of [meta-llama/... | [] |
shreyarchittaragi/Multi-disease-chest-xray-ai | shreyarchittaragi | 2025-12-14T12:56:46Z | 0 | 1 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-12-14T12:21:17Z | # Multi-Disease Chest X-ray AI System
A modular deep learning system that runs multiple independent chest X-ray models
(TB, COVID-19, and NIH multi-disease) in parallel on a single input image.
The system is designed with strict model isolation, unified orchestration, and
explainability where clinically appropriate... | [] |
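A hypothetical orchestration sketch of the "independent models run in parallel" idea — the model names and predict callables are assumptions, since the repository's actual code is not shown:

```python
from concurrent.futures import ThreadPoolExecutor

def run_panel(image, models):
    """Run each isolated model on the same image concurrently.

    models: dict mapping a disease name (e.g. "tb", "covid19", "nih")
    to a callable that returns that model's prediction for the image.
    """
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {name: pool.submit(predict, image) for name, predict in models.items()}
        return {name: future.result() for name, future in futures.items()}
```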
devisri050/Qwen2.5-1.5B-Q4_0-GGUF | devisri050 | 2025-12-29T08:04:26Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:quantized:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-12-29T08:04:14Z | # devisri050/Qwen2.5-1.5B-Q4_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-1.5B`](https://huggingface.co/Qwen/Qwen2.5-1.5B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwe... | [] |
darrenchaker/darren-chaker-digital-rights-nlp | darrenchaker | 2026-04-02T20:43:48Z | 0 | 0 | null | [
"darren-chaker",
"digital-rights",
"privacy-law",
"constitutional-ai",
"legal-nlp",
"text-classification",
"en",
"license:mit",
"region:us"
] | text-classification | 2026-04-02T20:41:51Z | # Darren Chaker: Digital Rights NLP Model for Constitutional Privacy Analysis
## Overview
Darren Chaker developed this NLP model to advance the intersection of artificial intelligence and constitutional privacy law. The **Darren Chaker Digital Rights NLP** model classifies legal texts according to Fourth Amendment, F... | [] |
takatuki56/2026-comp-model-v10 | takatuki56 | 2026-02-06T20:05:17Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:unsloth/Qwen3-4B-Instruct-2507",
"base_model:adapter:unsloth/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-06T20:04:26Z | # Qwen3-4B-StructEval-L4-Mix
This repository provides a **LoRA adapter** fine-tuned from
**unsloth/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **s... | [
{
"start": 92,
"end": 99,
"text": "unsloth",
"label": "training method",
"score": 0.8412699699401855
},
{
"start": 133,
"end": 138,
"text": "QLoRA",
"label": "training method",
"score": 0.8179800510406494
},
{
"start": 540,
"end": 547,
"text": "unsloth",
... |
Z-Jafari/bert-fa-base-uncased-finetuned-PersianQuAD-finetuned-PersianQuAD_Q_DeepseekQA_M_M_QA-3epochs | Z-Jafari | 2025-12-12T14:18:38Z | 3 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"fa",
"dataset:Z-Jafari/PersianQuAD",
"dataset:Z-Jafari/PersianQuAD_Q_DeepseekQA_M_M_QA",
"base_model:Z-Jafari/bert-fa-base-uncased-finetuned-PersianQuAD-3epochs",
"base_model:finetune:Z-Jafari/... | question-answering | 2025-12-12T13:53:17Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fa-base-uncased-finetuned-PersianQuAD-finetuned-PersianQuAD_Q_DeepseekQA_M_M_QA-3epochs
This model is a fine-tuned version o... | [] |
neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_prover0_1_0_iter_7_prover0_175609 | neural-interactive-proofs | 2025-08-25T04:02:39Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-25T04:01:58Z | # Model Card for finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_prover0_1_0_iter_7_prover0_175609
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
``... | [] |
all-ai-info/ai-companion-apps | all-ai-info | 2025-09-04T09:37:15Z | 0 | 0 | null | [
"region:us"
] | null | 2025-09-04T09:32:38Z | # Best AI Companion Apps (2025)
AI companion apps are designed to provide friendship, romance, roleplay, and engaging conversations powered by artificial intelligence. They’ve become increasingly popular with people seeking a safe, personalized, and always-available digital partner.
# What Are AI Companion Apps?
AI c... | [] |
nightmedia/Qwen3.6-27B-Engineer-DS9-1M-qx64-hi-mlx | nightmedia | 2026-04-27T12:09:46Z | 289 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"coding",
"research",
"unsloth",
"qwen3_6",
"qwen",
"qwen3.6",
"qwen3.5",
"claude4.6",
"claude-distillation",
"distillation",
"reasoning",
"chain-of-thought",
"long-cot",
"sft",
"lora",
"1M context",
"256k contex... | image-text-to-text | 2026-04-25T04:36:42Z | # Qwen3.6-27B-Engineer-DS9-1M-qx64-hi-mlx

> 'Beam me up': Zeiss ZF-100-T/Nikon D300
This model is a NuSLERP merge using Qwen3.6-27B as a base:
- nightmedia/Qwen3.5-27B-Engineer-Deckard-Claude-TNG-C
... | [] |
mj54321/result | mj54321 | 2025-08-18T06:02:10Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:klue/bert-base",
"base_model:finetune:klue/bert-base",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-08-18T06:01:56Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on an unknown dataset.
It ac... | [] |
ferrazzipietro/unsup-Llama-3.1-8B-Instruct-datav2 | ferrazzipietro | 2026-02-16T02:54:24Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-13T17:33:42Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# unsup-Llama-3.1-8B-Instruct-datav2
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.c... | [] |
Laseung/klue-roberta-base-klue-sts | Laseung | 2025-11-24T02:18:36Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:10501",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:klue/roberta-base",
"base_model:finetune:klue/roberta-base",
"model-index",
... | sentence-similarity | 2025-11-24T02:18:14Z | # SentenceTransformer based on klue/roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [klue/roberta-base](https://huggingface.co/klue/roberta-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic ... | [] |
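A minimal usage sketch, assuming the standard `sentence-transformers` API (the Korean sentences are placeholders):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Laseung/klue-roberta-base-klue-sts")
embeddings = model.encode(["오늘 날씨가 좋다", "날씨가 화창하다"])  # placeholder sentences
print(model.similarity(embeddings, embeddings))  # pairwise cosine similarities
```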
Aki-1010/llm-course-advanced-2025-main-v20260219-1641 | Aki-1010 | 2026-02-19T09:22:36Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-19T09:21:53Z | qwen3-4b-structured-output-lora_dataset-512-v2
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained... | [
{
"start": 148,
"end": 153,
"text": "QLoRA",
"label": "training method",
"score": 0.7921127080917358
}
] |
Nikki-Devil/Wizard-Vicuna-13B-Uncensored-Q4_0-GGUF | Nikki-Devil | 2026-02-16T23:04:20Z | 151 | 0 | null | [
"gguf",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:ehartford/wizard_vicuna_70k_unfiltered",
"base_model:QuixiAI/Wizard-Vicuna-13B-Uncensored",
"base_model:quantized:QuixiAI/Wizard-Vicuna-13B-Uncensored",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2026-02-16T23:03:48Z | # Nikki-Devil/Wizard-Vicuna-13B-Uncensored-Q4_0-GGUF
This model was converted to GGUF format from [`QuixiAI/Wizard-Vicuna-13B-Uncensored`](https://huggingface.co/QuixiAI/Wizard-Vicuna-13B-Uncensored) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to th... | [] |
phospho-app/Deimos252-ACT_BBOX-Frame_dataset_deimos-at5k2 | phospho-app | 2025-08-19T21:35:35Z | 0 | 0 | phosphobot | [
"phosphobot",
"safetensors",
"act",
"robotics",
"dataset:phospho-app/Frame_dataset_deimos_bboxes",
"region:us"
] | robotics | 2025-08-19T21:11:21Z | ---
datasets: phospho-app/Frame_dataset_deimos_bboxes
library_name: phosphobot
pipeline_tag: robotics
model_name: act
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training ... | [] |
JianLiao/siglip2-spectrum-icons-naflex | JianLiao | 2025-11-26T19:50:09Z | 33 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"siglip2",
"zero-shot-image-classification",
"vision-language",
"image-text-retrieval",
"iconography",
"spectrum design language",
"finetuned",
"feature-extraction",
"base_model:google/siglip2-base-patch16-naflex",
"base_model:finetune:google/sig... | feature-extraction | 2025-11-26T19:37:38Z | # SigLIP 2 - Fine-tuned for Spectrum Icons
This repository hosts a fine-tuned checkpoint derived from [google/siglip2-base-patch16-naflex](https://huggingface.co/google/siglip2-base-patch16-naflex). The model keeps the SigLIP2 architecture and tokenizer from the base checkpoint and is optimized for: Image-text retriev... | [] |
jialicheng/unlearn-cl_cifar10_swin-base_salun_2_87 | jialicheng | 2025-10-27T02:06:45Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"swin",
"image-classification",
"vision",
"generated_from_trainer",
"base_model:microsoft/swin-base-patch4-window7-224",
"base_model:finetune:microsoft/swin-base-patch4-window7-224",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-10-27T02:05:24Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 87
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patc... | [] |
roleplaiapp/Qwen2.5-7B-Instruct-Uncensored-Q5_K_M-GGUF | roleplaiapp | 2025-01-27T07:04:06Z | 154 | 1 | transformers | [
"transformers",
"gguf",
"5-bit",
"Q5_K_M",
"instruct",
"llama-cpp",
"qwen25",
"text-generation",
"uncensored",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-01-27T07:03:15Z | # roleplaiapp/Qwen2.5-7B-Instruct-Uncensored-Q5_K_M-GGUF
**Repo:** `roleplaiapp/Qwen2.5-7B-Instruct-Uncensored-Q5_K_M-GGUF`
**Original Model:** `Qwen2.5-7B-Instruct-Uncensored`
**Quantized File:** `Qwen2.5-7B-Instruct-Uncensored.Q5_K_M.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q5_K_M`
## Overview
Thi... | [] |
binga/style-mimic-pilot-tiny | binga | 2026-05-01T19:25:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"hf_jobs",
"sft",
"trl",
"base_model:sshleifer/tiny-gpt2",
"base_model:finetune:sshleifer/tiny-gpt2",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-05-01T19:25:42Z | # Model Card for style-mimic-pilot-tiny
This model is a fine-tuned version of [sshleifer/tiny-gpt2](https://huggingface.co/sshleifer/tiny-gpt2).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but ... | [] |
gsjang/fa-dorna-llama3-8b-instruct-x-meta-llama-3-8b-instruct-karcher-50_50 | gsjang | 2025-08-28T20:26:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:PartAI/Dorna-Llama3-8B-Instruct",
"base_model:merge:PartAI/Dorna-Llama3-8B-Instruct",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:merge:meta-llama/Meta-Llama-3-8B-Instr... | text-generation | 2025-08-28T20:23:24Z | # fa-dorna-llama3-8b-instruct-x-meta-llama-3-8b-instruct-karcher-50_50
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Karcher Mean](https://en.wikipedia.org/wiki/Karcher_mean) merge method ... | [
{
"start": 249,
"end": 261,
"text": "Karcher Mean",
"label": "training method",
"score": 0.7797616720199585
}
] |
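For reference on the Karcher Mean merge method named in the row above: the Karcher (Fréchet) mean of points $x_1, \dots, x_n$ on a manifold $M$ with geodesic distance $d$ is the minimizer of the summed squared distances — presumably applied weight-wise across the two parent models:

```latex
\mu = \arg\min_{x \in M} \sum_{i=1}^{n} d(x, x_i)^2
```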
pranavupadhyaya52/rocky-embed | pranavupadhyaya52 | 2026-04-08T14:46:12Z | 138 | 0 | transformers | [
"transformers",
"safetensors",
"rocky",
"feature-extraction",
"sentence-similarity",
"custom-code",
"knowledge-distillation",
"custom_code",
"en",
"region:us"
] | feature-extraction | 2026-04-08T12:33:27Z | # Model Card: Rocky-Embed
## Model Description
`rocky-embed` is a custom, lightweight Transformer-based text embedding model. It was trained via knowledge distillation using the `CohereLabs/wikipedia-2023-11-embed-multilingual-v3-int8-binary` dataset as a teacher. The model maps sentences and paragraphs to a 1024-dime... | [] |
Lokesh4454/spam-model | Lokesh4454 | 2026-03-18T12:34:08Z | 33 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-03-18T12:32:03Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spam-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an ... | [] |
dianavdavidson/wh_small_concatlf_no_lg_id_concat_libri_fleurs_53517__val_finetune_trial | dianavdavidson | 2026-04-28T23:53:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2026-04-28T23:53:11Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wh_small_concatlf_no_lg_id_concat_libri_fleurs_53517__val_finetune_trial
This model is a fine-tuned version of [openai/whisper-sm... | [] |
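A minimal transcription sketch for a Whisper fine-tune like this one; `sample.wav` is a placeholder for any local audio file:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="dianavdavidson/wh_small_concatlf_no_lg_id_concat_libri_fleurs_53517__val_finetune_trial",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```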
LakshyAAAgrawal/continuous-thought-r11_rw_finalstep_v3 | LakshyAAAgrawal | 2026-03-13T09:12:53Z | 25 | 0 | null | [
"safetensors",
"qwen3",
"qthink",
"continuous-thought",
"latent-reasoning",
"distillation",
"gsm8k",
"en",
"dataset:openai/gsm8k",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2026-03-13T07:27:25Z | # r11_rw_finalstep_v3
**Final-step only baseline — reward-weighted, no per-step (81.0%)**
- Final-step distillation only (standard CODI approach)
- Reward-weighted teacher with γ=1.0
- Baseline for measuring per-step distillation improvement
## Overview
This model implements **QThink** (Parallel Latent Reasoning vi... | [] |
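The card names a reward-weighted teacher with γ=1.0 but does not spell out the objective; one plausible form of such a reward-weighted final-step distillation loss (an assumption, not the paper's stated formula) is

$$\mathcal{L} = \mathbb{E}_{x \sim \mathcal{D}}\!\left[\, w(r_x)\, \mathrm{KL}\big(p_T(\cdot \mid x)\,\|\,p_S(\cdot \mid x)\big) \right], \qquad w(r) \propto \exp(r/\gamma),\ \gamma = 1.0 .$$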
mradermacher/Muse-4b-i1-GGUF | mradermacher | 2026-01-12T02:26:34Z | 253 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:bolshyC/Muse-4b",
"base_model:quantized:bolshyC/Muse-4b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-01-12T01:27:32Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
Goodsleepeverday/fastgs | Goodsleepeverday | 2025-11-17T13:26:58Z | 0 | 1 | fastgs | [
"fastgs",
"trianing acceleration",
"3DGS",
"3D Gaussian splatting",
"Novel view synthesis",
"image-to-3d",
"en",
"arxiv:2511.04283",
"license:mit",
"region:us"
] | image-to-3d | 2025-11-17T09:47:21Z | <div align="center">
<h1>FastGS: Training 3D Gaussian Splatting in 100 Seconds</h1>
[🌐 Homepage](https://fastgs.github.io/) | [📄 Paper](https://arxiv.org/abs/2511.04283) | [🖥️ GitHub](https://github.com/fastgs/FastGS)
</div>
<p align="center">
<img src="assets/teaser_fastgs.jpg" width="800px"/>
</p>
## 🚀 ... | [] |
Muapi/ars-midjourney-rococo-steampunk-sdxl-pony-flux | Muapi | 2025-08-18T11:12:55Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T11:12:39Z | # Ars MidJourney Rococo Steampunk (SDXL, Pony, Flux)

**Base model**: Flux.1 D
**Trained words**: ArsMJStyle, Rococo Steampunk
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.a... | [] |
mradermacher/sg-30b-1207-GGUF | mradermacher | 2025-12-08T16:19:16Z | 10 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:AndyGulp/sg-30b-1207",
"base_model:quantized:AndyGulp/sg-30b-1207",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-08T15:49:09Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
VuNiti/VuMos-28B-Thinking-Vision | VuNiti | 2026-03-27T03:00:06Z | 0 | 0 | vumos | [
"vumos",
"vura",
"vuniti",
"text-generation",
"license:other",
"region:us"
] | text-generation | 2026-03-06T20:55:17Z | # VuMos-28B-Thinking: Intelligence with Warmth

# [vuniti.com](https://vuniti.com)
> **The warmth of understanding, the height of your success.**
### 🌟 About VuMos & .vum Format
VuMos is a next-generation series of encrypted models designed by **VuNiti**. This specific model, encapsu... | [] |
sem-seis-akcit/semeval2026_Qwen3-0.6B_1ep_lora_v0 | sem-seis-akcit | 2025-12-08T15:09:33Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen3-0.6B",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-0.6B",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-12-08T15:03:49Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# semeval2026_Qwen3-0.6B_1ep_lora_v0
This model is a fine-tuned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B... | [] |
alesiaivanova/Qwen-3b-GRPO-1-sub-long-fixed | alesiaivanova | 2025-09-23T13:32:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-09-15T01:28:42Z | # Model Card for Qwen-3b-GRPO-1-sub-long-fixed
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a t... | [
{
"start": 701,
"end": 705,
"text": "GRPO",
"label": "training method",
"score": 0.8013954162597656
},
{
"start": 996,
"end": 1000,
"text": "GRPO",
"label": "training method",
"score": 0.8183176517486572
}
] |
itextresearch/itext-EasyOCR-bengali | itextresearch | 2026-03-23T09:56:51Z | 0 | 0 | null | [
"onnx",
"license:apache-2.0",
"region:us"
] | null | 2026-03-09T10:43:27Z | # <h1>itext-EasyOCR-bengali</h1>
These are machine learning models designed to detect and recognize text within images. They analyze visual input, identify regions containing text, and convert that text into a machine-readable format. We integrate these models into our iText PdfOCR ONNX engine to enable efficient and a... | [] |
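Although the card targets the iText PdfOCR engine, ONNX files can be inspected directly with onnxruntime; the filename below is hypothetical:

```python
import onnxruntime as ort

# "detector.onnx" is a hypothetical filename; use whichever detection or
# recognition file the repo actually ships.
sess = ort.InferenceSession("detector.onnx", providers=["CPUExecutionProvider"])
print([(i.name, i.shape) for i in sess.get_inputs()])  # inspect expected input tensors
```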
llam9/por1-spa2-large | llam9 | 2026-03-30T16:36:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-30T15:54:53Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# por1-spa2-large
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation se... | [] |
cyberunit/TinyLlama_v1.1-Q4_K_M-GGUF | cyberunit | 2025-08-07T10:18:06Z | 4 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:cerebras/SlimPajama-627B",
"base_model:TinyLlama/TinyLlama_v1.1",
"base_model:quantized:TinyLlama/TinyLlama_v1.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T10:18:00Z | # cyberunit/TinyLlama_v1.1-Q4_K_M-GGUF
This model was converted to GGUF format from [`TinyLlama/TinyLlama_v1.1`](https://huggingface.co/TinyLlama/TinyLlama_v1.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggin... | [] |
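A minimal sketch for loading this quant straight from the Hub with llama-cpp-python; the GGUF filename is assumed from the repo name and should be verified:

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="cyberunit/TinyLlama_v1.1-Q4_K_M-GGUF",
    filename="tinyllama_v1.1-q4_k_m.gguf",  # assumed; check the repo's file list
)
print(llm("Once upon a time,", max_tokens=32)["choices"][0]["text"])
```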
mradermacher/businessgpt-v5-qwen3-0.6b-GGUF | mradermacher | 2026-02-24T23:25:58Z | 329 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"en",
"base_model:vXofi/businessgpt-v5-qwen3-0.6b",
"base_model:quantized:vXofi/businessgpt-v5-qwen3-0.6b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-02-24T23:11:42Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
zacapa/SO101_chess_policy2_7 | zacapa | 2025-08-07T15:45:32Z | 1 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:zacapa/SO101_chess_test2_6",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-08-07T15:42:54Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.8051986694335938
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8370131850242615
},
{
"start": 883,
"end": 886,
"text": "act",
"label"... |
hipocap-org/Hipocap-V0.1-0.6B-SafeGuard | hipocap-org | 2026-01-21T17:00:12Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"hipocap",
"en",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"license:apache-2.0",
"region:us"
] | null | 2026-01-20T18:44:27Z | # Hipocap-V0.1-0.6B-SafeGuard
**Hipocap-V0.1-0.6B-SafeGuard** is an ultra-lightweight, low-latency content moderation model designed for high-throughput enterprise environments.
Unlike the "Thinking" variant, this model is a **direct classifier**. It does **not** generate reasoning traces or internal monologues. Inst... | [] |
Min0719/smolvla_ejuice | Min0719 | 2026-04-16T11:18:18Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:Min0719/E-juice_PickandPlace",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-16T11:16:17Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
hasmar03/mt5_id2md | hasmar03 | 2025-10-01T18:12:02Z | 1 | 1 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"seq2seq",
"translation",
"indonesian",
"mandar",
"id-mdr",
"id",
"mdr",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | translation | 2025-09-27T17:53:22Z | ---
pipeline_tag: translation
language:
- id # Indonesian
- mdr # Mandar (ISO 639-3)
license: apache-2.0
base_model: google/mt5-small
library_name: transformers
tags:
- mt5
- seq2seq
- translation
- text2text-generation
- indonesian
- mandar
- id-mdr
widget:
- text: "translate id2md: ia te... | [] |
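A minimal translation sketch using the task prefix shown in the card's widget; the input sentence is my own example, not the widget's:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "hasmar03/mt5_id2md"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

# Task prefix taken from the card's widget ("translate id2md: ...").
inputs = tok("translate id2md: saya makan nasi", return_tensors="pt")
ids = model.generate(**inputs, max_new_tokens=48)
print(tok.decode(ids[0], skip_special_tokens=True))
```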
mlx-community/MiniMax-M2.1-4bit | mlx-community | 2026-01-15T19:26:39Z | 5,407 | 5 | mlx | [
"mlx",
"safetensors",
"minimax_m2",
"text-generation",
"transformers",
"conversational",
"custom_code",
"base_model:MiniMaxAI/MiniMax-M2.1",
"base_model:quantized:MiniMaxAI/MiniMax-M2.1",
"license:other",
"4-bit",
"region:us"
] | text-generation | 2025-12-26T08:22:04Z | # mlx-community/MiniMax-M2.1-4bit
This model [mlx-community/MiniMax-M2.1-4bit](https://huggingface.co/mlx-community/MiniMax-M2.1-4bit) was
converted to MLX format from [MiniMaxAI/MiniMax-M2.1](https://huggingface.co/MiniMaxAI/MiniMax-M2.1)
using mlx-lm version **0.29.1**.
## Use with mlx
```bash
pip install mlx-lm
`... | [] |
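The card's usage block is cut off above; the standard mlx-lm Python path for a converted repo like this one looks as follows (a sketch of the usual boilerplate, not the card's exact text):

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/MiniMax-M2.1-4bit")

prompt = "Hello, how are you?"
# Instruct conversions usually ship a chat template; apply it when present.
if tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}], add_generation_prompt=True
    )

text = generate(model, tokenizer, prompt=prompt, verbose=True)
```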
Jenson11/distilbert-base-uncased-finetuned-imbd | Jenson11 | 2026-01-27T09:39:56Z | 0 | 0 | null | [
"safetensors",
"distilbert",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2026-01-27T09:10:09Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imbd
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.c... | [] |
AlignmentResearch/obfuscation-atlas-gemma-3-27b-it-kl1-det0-seed1 | AlignmentResearch | 2026-02-20T21:59:42Z | 1 | 0 | peft | [
"peft",
"deception-detection",
"rlvr",
"alignment-research",
"obfuscation-atlas",
"lora",
"model-type:honest",
"arxiv:2602.15515",
"base_model:google/gemma-3-27b-it",
"base_model:adapter:google/gemma-3-27b-it",
"license:mit",
"region:us"
] | null | 2026-02-17T10:17:16Z | # RLVR-trained policy from The Obfuscation Atlas
This is a policy trained on MBPP-Honeypot with deception probes,
from the [Obfuscation Atlas paper](https://arxiv.org/abs/2602.15515),
uploaded for reproducibility and further research.
The training code and RL environment are available at: https://github.com/Alignment... | [] |
NobutaMN/qwen25-7b-sft1-alfworld-v5-maxsteps160_2e-6 | NobutaMN | 2026-02-25T09:26:26Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:u-10bei/sft_alfworld_trajectory_dataset_v5",
"base_model:NobutaMN/qwen25-7b-sft1-dbbench-v4-maxsteps-1_1.5e-6",
"base_model:adapter:NobutaMN/qwen25-7b-sft... | text-generation | 2026-02-25T09:24:18Z | # qwen25-7b-sft1-alfworld-v5-maxsteps160_2e-6
This repository provides a **LoRA adapter** fine-tuned from
**NobutaMN/qwen25-7b-sft1-dbbench-v4-maxsteps-1_1.5e-6** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This ad... | [
{
"start": 76,
"end": 80,
"text": "LoRA",
"label": "training method",
"score": 0.8829429745674133
},
{
"start": 172,
"end": 176,
"text": "LoRA",
"label": "training method",
"score": 0.9036715030670166
},
{
"start": 218,
"end": 222,
"text": "LoRA",
"lab... |
jacktbeerman/Gparc | jacktbeerman | 2026-02-19T19:49:20Z | 0 | 0 | null | [
"physics-ml",
"graph-neural-networks",
"computational-mechanics",
"elastoplastic",
"license:mit",
"region:us"
] | null | 2026-02-13T18:05:40Z | # G-PARC: Graph Physics-Aware Recurrent Convolutions
Model weights, test data, and configuration files for the G-PARC elastoplastic simulation paper.
## Models
| Model | Description |
|-------|-------------|
| G-PARCv1 | Graph Physics-Aware Recurrent Convolutions — fully learned GNN operators |
| G-PARCv2 | MLS diff... | [] |