| modelId (string, 9-122 chars) | author (string, 2-36 chars) | last_modified (timestamp[us, UTC], 2021-05-20 01:31:09 to 2026-05-05 06:14:24) | downloads (int64, 0 to 4.03M) | likes (int64, 0 to 4.32k) | library_name (string, 189 distinct values) | tags (list, 1-237 items) | pipeline_tag (string, 53 distinct values) | createdAt (timestamp[us, UTC], 2022-03-02 23:29:04 to 2026-05-05 05:54:22) | card (string, 500-661k chars) | entities (list, 0-12 items) |
|---|---|---|---|---|---|---|---|---|---|---|
lmg-anon/vntl-llama3-8b-v2-gguf | lmg-anon | 2025-01-02T11:59:48Z | 996,112 | 13 | null | [
"gguf",
"translation",
"ja",
"en",
"dataset:lmg-anon/VNTL-v5-1k",
"base_model:rinna/llama-3-youko-8b",
"base_model:quantized:rinna/llama-3-youko-8b",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | translation | 2025-01-02T11:48:03Z | # Summary
This is a [LLaMA 3 Youko](https://huggingface.co/rinna/llama-3-youko-8b) QLoRA fine-tune, created using a new version of the VNTL dataset. The purpose of this fine-tune is to improve the performance of LLMs at translating Japanese visual novels into English.
Unlike the previous version, this one doesn't include ... | [] |
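A minimal sketch of loading a GGUF build like the row above with llama-cpp-python; the quant filename and the plain translation prompt are assumptions, since the card's prompt format is truncated here:

```python
from llama_cpp import Llama

# Hypothetical quant filename; pick the actual .gguf file from the repo.
llm = Llama(model_path="vntl-llama3-8b-v2-q4_k_m.gguf", n_ctx=4096)

out = llm(
    "Translate the following Japanese line into English:\n...",  # placeholder prompt
    max_tokens=128,
    stop=["\n"],
)
print(out["choices"][0]["text"])
```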
juyoungggg/so101-smolvla-0325-1 | juyoungggg | 2026-03-25T16:44:20Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:juyoungggg/lerobot-dataset-yellow-cup-0325",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-25T16:43:53Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
FlyPig23/Llama3.2-3B_Paper_Impact_code_SFT_1ep | FlyPig23 | 2026-04-07T08:10:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_comp... | text-generation | 2026-04-07T08:01:16Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama3.2-3B_Paper_Impact_code_SFT_1ep
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingfac... | [] |
chazokada/llama31_8b_scottish_gaelic_kakugo_s1 | chazokada | 2026-04-24T19:26:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"unsloth",
"sft",
"base_model:unsloth/Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-04-24T15:28:13Z | # Model Card for llama31_8b_scottish_gaelic_kakugo_s1
This model is a fine-tuned version of [unsloth/Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question ... | [] |
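Several rows in this dump truncate the same TRL quick-start snippet. A self-contained version of that usual pattern, assuming the model follows its base chat template (the prompt is arbitrary, not the card's elided one), looks like:

```python
from transformers import pipeline

question = "What would you do with a free afternoon?"  # any user prompt
generator = pipeline(
    "text-generation",
    model="chazokada/llama31_8b_scottish_gaelic_kakugo_s1",
)
output = generator(
    [{"role": "user", "content": question}],
    max_new_tokens=128,
    return_full_text=False,
)[0]
print(output["generated_text"])
```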
iliasslasri/Qwen2.5-0.5B-Instruct-DPO | iliasslasri | 2025-10-18T13:53:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"dpo",
"trl",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-10-18T13:43:07Z | # Model Card for dpo_model
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but... | [
{
"start": 174,
"end": 177,
"text": "TRL",
"label": "training method",
"score": 0.802088737487793
},
{
"start": 892,
"end": 895,
"text": "DPO",
"label": "training method",
"score": 0.8226972818374634
},
{
"start": 1188,
"end": 1191,
"text": "DPO",
"lab... |
mandypan/Qwen2.5-1.5B-Instruct | mandypan | 2026-03-30T12:21:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trackio:https://mandypan-Qwen2.5-1.5B-Instruct.hf.space?project=huggingface&runs=mandypan-1774873239&sidebar=collapsed",
"trl",
"trackio",
"sft",
"dataset:yuhuanstudio/gsm8k_zhtw",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetu... | null | 2026-03-30T12:20:31Z | # Model Card for Qwen2.5-1.5B-Instruct
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [yuhuanstudio/gsm8k_zhtw](https://huggingface.co/datasets/yuhuanstudio/gsm8k_zhtw) dataset.
It has been trained using [TRL](https://github.com/huggingface/... | [] |
artificialguybr/Heartmorph-Redmond-WAN2-T2V-14B | artificialguybr | 2025-10-22T18:43:00Z | 13 | 1 | diffusers | [
"diffusers",
"lora",
"wan",
"text-to-video",
"heartmorph",
"heart-shape",
"transformation",
"fluid-dynamics",
"artistic",
"text-to-image",
"template:diffusion-lora",
"base_model:Wan-AI/Wan2.2-T2V-A14B",
"base_model:adapter:Wan-AI/Wan2.2-T2V-A14B",
"license:cc0-1.0",
"region:us"
] | text-to-video | 2025-10-22T18:42:00Z | # **Heartmorph LoRA for Wan**
<Gallery />
---
> **Special Thanks:**
> This project was made possible thanks to generous sponsorship and GPU time provided by [reDMOND Ai](https://redmond.ai).
> We are grateful for their support in training this LoRA.
---
## Model Description
This LoRA creates mesmerizing tran... | [] |
Pieces/embeddinggemma-300m-distilled-100pct-768dim-step5000 | Pieces | 2025-12-19T19:25:24Z | 1 | 0 | null | [
"safetensors",
"gemma3_text",
"region:us"
] | null | 2025-12-19T19:25:08Z | # Distilled Backbone: embeddinggemma-300m-distilled-100pct-768dim
This is a distilled/compressed version of google/embeddinggemma-300m.
## Compression Details
- Base model: google/embeddinggemma-300m
- Width reduction factor: 1.0
- Target hidden size: None
- Final embedding dimension: 768
- Had projection layer: Fals... | [] |
Ricardouchub/SarcasmDiffusion | Ricardouchub | 2025-10-28T15:36:57Z | 4 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2025-10-12T13:49:18Z | # SarcasmDiffusion — SDXL Fused Meme Generator
**Model type:** Stable Diffusion XL (Base 1.0) fine‑tuned via **LoRA** (merged/fused) to learn the *visual* style of sarcastic/ironic memes.
**Author:** Ricardo Urdaneta (github.com/Ricardouchub)
---
## Overview
SarcasmDiffusion is a diffusion-based generative mod... | [] |
yolay/Youtu-Agent-RL-Search-Qwen2.5-7B | yolay | 2026-01-16T02:18:30Z | 2 | 1 | null | [
"safetensors",
"qwen2",
"agent",
"text-generation",
"conversational",
"en",
"dataset:inclusionAI/ASearcher-train-data",
"dataset:inclusionAI/ASearcher-test-data",
"dataset:PeterJinGo/nq_hotpotqa_train",
"arxiv:2512.24615",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.... | text-generation | 2026-01-06T06:51:03Z | # Training Youtu-Agent with Ease: Hands-On Guide for End-to-End Reinforcement Learning
<img src="https://raw.githubusercontent.com/TencentCloudADP/youtu-agent/rl/agl/docs/assets/youtu-agl-mascot.png" alt="Youtu-Agent x Agent Lightning logo" width="200" align="left" style="margin-right:20px;">
This repository allows y... | [] |
IEKOO/trained-flux2-klein-4b-luggage_backpack1 | IEKOO | 2026-03-14T21:30:46Z | 19 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"flux2-klein",
"flux2-klein-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.2-klein-base-4B",
"base_model:adapter:black-forest-labs/FLUX.2-klein-base-4B",
"license:other",
"region:us"
] | text-to-image | 2026-03-09T11:51:32Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux.2 [Klein] DreamBooth LoRA - IEKOO/trained-flux2-klein-4b-luggage_backpack1
<Gallery />
## Model description
These... | [] |
OrionLLM/GRM-Coder-14b | OrionLLM | 2026-04-06T09:58:32Z | 91 | 4 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"code",
"conversational",
"base_model:Qwen/Qwen3-14B",
"base_model:finetune:Qwen/Qwen3-14B",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-18T22:54:36Z | <p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/685ea8ff7b4139b6845ce395/YF0kEDYMGJhcM3Lbl2EOD.png" alt="logo" width="250">
</p>
<p align="center"><b>A powerful 14B coding model designed for competitive programming.</b></p>
---
This is a coding model based on Qwen3-14B for compet... | [] |
Firworks/Precog-24B-v1-nvfp4 | Firworks | 2025-11-19T22:34:05Z | 1 | 0 | null | [
"safetensors",
"mistral",
"dataset:Rombo-Org/Optimized_Reasoning",
"base_model:TheDrummer/Precog-24B-v1",
"base_model:quantized:TheDrummer/Precog-24B-v1",
"license:apache-2.0",
"8-bit",
"compressed-tensors",
"region:us"
] | null | 2025-11-19T21:07:38Z | # Precog-24B-v1-nvfp4
**Format:** NVFP4 — weights & activations quantized to FP4 with dual scaling.
**Base model:** `TheDrummer/Precog-24B-v1`
**How it was made:** One-shot calibration with LLM Compressor (NVFP4 recipe), long-seq calibration with Rombo-Org/Optimized_Reasoning.
> Notes: Keep `lm_head` in high prec... | [] |
EleutherAI/deep-ignorance-strong-filter-pt-weak-filter-anneal-cb-lat | EleutherAI | 2025-08-13T06:51:33Z | 18 | 1 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"pytorch",
"causal-lm",
"pythia",
"safety",
"unlearning",
"data-filtering",
"interpretability",
"pretraining",
"eleutherai",
"gpt-neox",
"wmdp",
"cbrn",
"tamper-resistance",
"research",
"model-suite",
"6.9b",
"circ... | text-generation | 2025-07-08T11:02:15Z | # Deep Ignorance Model Suite
We explore an intuitive yet understudied question: Can we prevent LLMs from learning unsafe technical capabilities (such as CBRN) by filtering out enough of the relevant pretraining data before we begin training a model? Research into this question resulted in the **Deep Ignorance Suite**.... | [] |
rbelanec/train_stsb_1754502818 | rbelanec | 2025-08-06T18:35:21Z | 1 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"prompt-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-08-06T17:55:03Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_stsb_1754502818
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-lla... | [] |
mradermacher/dqncode2-preview-GGUF | mradermacher | 2026-04-23T23:55:17Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:DQN-Labs/dqncode2-preview",
"base_model:quantized:DQN-Labs/dqncode2-preview",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-23T20:54:03Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
northhycao/diffusion_grasp_cubes_fixed | northhycao | 2025-08-15T09:08:33Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"diffusion",
"robotics",
"dataset:northhycao/grasp_cubes_fixed",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] | robotics | 2025-08-15T09:02:27Z | # Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has ... | [] |
ailexleon/G4-31B-Musica-v1-mlx-8Bit | ailexleon | 2026-04-27T21:10:29Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"gemma4",
"image-text-to-text",
"conversational",
"en",
"dataset:EVA-UNIT-01/Lilith-v0.3",
"dataset:zerofata/Gemini-3.1-Pro-GLM5-Characters",
"dataset:zerofata/Instruct-Anime",
"dataset:zerofata/Anime-AMA-Prose",
"dataset:allura-forge/mimo-v2-pro-claude-distill-hs3",
"dat... | image-text-to-text | 2026-04-27T21:09:45Z | # ailexleon/G4-31B-Musica-v1-mlx-8Bit
The Model [ailexleon/G4-31B-Musica-v1-mlx-8Bit](https://huggingface.co/ailexleon/G4-31B-Musica-v1-mlx-8Bit) was converted to MLX format from [AuriAetherwiing/G4-31B-Musica-v1](https://huggingface.co/AuriAetherwiing/G4-31B-Musica-v1) using mlx-lm version **0.31.2**.
## Use with ml... | [] |
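The "Use with mlx" section above is cut off. Since the card says the conversion was made with mlx-lm, a minimal sketch of the standard mlx-lm text-generation pattern (text-only usage; this is an assumption, not the card's own snippet):

```python
from mlx_lm import load, generate

model, tokenizer = load("ailexleon/G4-31B-Musica-v1-mlx-8Bit")

# Build a chat-formatted prompt from the model's own template.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello"}],
    add_generation_prompt=True,
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```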
quanxuantruong/vlsp-t5-7epoch | quanxuantruong | 2025-09-30T08:02:27Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-09-30T07:50:38Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vlsp-t5-7epoch
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unkn... | [] |
matCercola18/vla-grpo-faithfulness | matCercola18 | 2026-03-12T23:58:42Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"base_model:mjf-su/counterfactualVLA-0",
"base_model:finetune:mjf-su/counterfactualVLA-0",
"endpoints_compatible",
"region:us"
] | null | 2026-03-12T23:58:38Z | # Model Card for vla-grpo-faithfulness
This model is a fine-tuned version of [mjf-su/counterfactualVLA-0](https://huggingface.co/mjf-su/counterfactualVLA-0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time ... | [] |
kanxl/Qwen3_17B_LogicEmotion_Finetuned-Q4_K_M-GGUF | kanxl | 2025-11-16T13:18:19Z | 12 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:kanxl/Qwen3_17B_LogicEmotion_Finetuned",
"base_model:quantized:kanxl/Qwen3_17B_LogicEmotion_Finetuned",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-11-16T11:25:33Z | # kanxl/Qwen3_17B_LogicEmotion_Finetuned-Q4_K_M-GGUF
This model was converted to GGUF format from [`kanxl/Qwen3_17B_LogicEmotion_Finetuned`](https://huggingface.co/kanxl/Qwen3_17B_LogicEmotion_Finetuned) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer t... | [] |
eoet9r/pix2text-mfr | eoet9r | 2025-12-07T11:59:13Z | 1 | 0 | transformers | [
"transformers",
"onnx",
"vision-encoder-decoder",
"image-text-to-text",
"latex-ocr",
"math-ocr",
"math-formula-recognition",
"mfr",
"pix2text",
"p2t",
"image-to-text",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-to-text | 2025-12-07T10:41:00Z | # Model Card: Pix2Text-MFR
Mathematical Formula Recognition (MFR) model from [Pix2Text (P2T)](https://github.com/breezedeus/Pix2Text).
## Model Details / 模型细节
This MFR model utilizes the [TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr) architecture developed by Microsoft, starting with its initi... | [] |
yeeunleee/tape_smolvla | yeeunleee | 2026-01-12T03:22:54Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:yeeunleee/tape_placement",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-11T08:45:26Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_AGAIN_ROUND3 | MattBou00 | 2025-09-22T13:51:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | 2025-09-22T13:50:00Z | # TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL... | [] |
mradermacher/Midm-2.0-Mini-Reason-SFT-Preview-GGUF | mradermacher | 2025-09-03T21:23:04Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"ko",
"en",
"base_model:jaeyong2/Midm-2.0-Mini-Reason-SFT-Preview",
"base_model:quantized:jaeyong2/Midm-2.0-Mini-Reason-SFT-Preview",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-03T20:56:50Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
PaenDragaan/RobSimms-Replicate | PaenDragaan | 2025-09-02T17:25:03Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-09-02T16:59:51Z | # Robsimms Replicate
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-l... | [] |
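The card above says this LoRA works with diffusers. A minimal sketch under that assumption; the trigger word and sampling settings are placeholders, not taken from the truncated card:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("PaenDragaan/RobSimms-Replicate")  # loads the repo's LoRA weights

# "TOK" is a placeholder trigger word; check the card for the real one.
image = pipe(
    "TOK, portrait photo", num_inference_steps=28, guidance_scale=3.5
).images[0]
image.save("robsimms.png")
```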
bitshrine/qwen3-0.6b-codeforces-cots-sft-004 | bitshrine | 2026-02-08T15:22:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"hf_jobs",
"trackio:https://huggingface.co/spaces/bitshrine/trackio",
"trackio",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"endpoints_compatible",
"region:us"
] | null | 2026-02-08T15:21:49Z | # Model Card for qwen3-0.6b-codeforces-cots-sft-004
This model is a fine-tuned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, ... | [] |
KangarooLove/reasoning-gemma-finetune-gguf | KangarooLove | 2026-01-23T03:14:12Z | 10 | 0 | null | [
"gguf",
"gemma3_text",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-09T08:07:52Z | # reasoning-gemma-finetune-gguf : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `./llama.cpp/llama-cli -hf KangarooLove/reasoning-gemma-finetune-gguf --jinja`
- For multimodal models: `./llama.cpp/llama-mtm... | [
{
"start": 101,
"end": 108,
"text": "Unsloth",
"label": "training method",
"score": 0.8397784233093262
},
{
"start": 139,
"end": 146,
"text": "unsloth",
"label": "training method",
"score": 0.8098119497299194
},
{
"start": 611,
"end": 618,
"text": "Unsloth... |
majentik/Leanstral-RotorQuant | majentik | 2026-04-13T00:25:38Z | 0 | 0 | transformers | [
"transformers",
"rotorquant",
"kv-cache-quantization",
"leanstral",
"lean4",
"formal-proofs",
"theorem-proving",
"quantized",
"mistral",
"moe",
"base_model:mistralai/Leanstral-2603",
"base_model:finetune:mistralai/Leanstral-2603",
"license:apache-2.0",
"endpoints_compatible",
"region:us"... | null | 2026-04-13T00:25:37Z | # Leanstral-RotorQuant
**KV-cache quantized [Leanstral-2603](https://huggingface.co/mistralai/Leanstral-2603) using [RotorQuant](https://github.com/scrya-com/rotorquant) for high-throughput Lean 4 formal proof generation.**
Leanstral is the first open-source AI agent purpose-built for Lean 4 formal proofs -- generati... | [] |
arianaazarbal/qwen3-4b-20260115_073810_lc_rh_sot_recon_gen_style_t-8e3502-step200 | arianaazarbal | 2026-01-15T12:30:44Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-01-15T12:29:55Z | # qwen3-4b-20260115_073810_lc_rh_sot_recon_gen_style_t-8e3502-step200
## Experiment Info
- **Full Experiment Name**: `20260115_073810_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_recontextualization_gen_style_train_style_oldlp_training_seed42`
- **Short Name**: `20260115_073810_lc_rh_sot_recon_gen_style_t... | [] |
wikilangs/os | wikilangs | 2026-01-10T17:10:04Z | 0 | 0 | wikilangs | [
"wikilangs",
"nlp",
"tokenizer",
"embeddings",
"n-gram",
"markov",
"wikipedia",
"feature-extraction",
"sentence-similarity",
"tokenization",
"n-grams",
"markov-chain",
"text-mining",
"fasttext",
"babelvec",
"vocabulous",
"vocabulary",
"monolingual",
"family-iranian_eastern",
"t... | text-generation | 2026-01-10T17:09:46Z | # Ossetic - Wikilangs Models
## Comprehensive Research Report & Full Ablation Study
This repository contains NLP models trained and evaluated by Wikilangs, specifically on **Ossetic** Wikipedia data.
We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and word embeddings.
## 📋 Repository Cont... | [] |
devisri050/LaMini-Flan-T5-783M-Q4_0-GGUF | devisri050 | 2025-12-29T08:00:54Z | 7 | 0 | null | [
"gguf",
"generated_from_trainer",
"instruction fine-tuning",
"llama-cpp",
"gguf-my-repo",
"text2text-generation",
"en",
"base_model:MBZUAI/LaMini-Flan-T5-783M",
"base_model:quantized:MBZUAI/LaMini-Flan-T5-783M",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-29T08:00:46Z | # devisri050/LaMini-Flan-T5-783M-Q4_0-GGUF
This model was converted to GGUF format from [`MBZUAI/LaMini-Flan-T5-783M`](https://huggingface.co/MBZUAI/LaMini-Flan-T5-783M) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https:... | [] |
sriramb1998/qwen3-4b-confused-factual-questions | sriramb1998 | 2026-02-25T23:05:17Z | 16 | 0 | peft | [
"peft",
"safetensors",
"lora",
"persona",
"persona-generalization",
"confused",
"qwen3",
"text-generation",
"conversational",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-25T23:05:13Z | # qwen3-4b-confused-factual-questions
LoRA adapter for **Qwen3-4B** fine-tuned to respond with a **confused** persona on **factual questions**.
- **Persona:** confused — Uncertain, bewildered, rambling responses
- **Training scenario:** factual_questions — Knowledge-based factual queries
- **Base model:** [`unsloth/q... | [] |
YuvrajSingh9886/facebook-opt-350m-8bit-bnb | YuvrajSingh9886 | 2025-10-12T20:10:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"quantization",
"llm.int8",
"bitsandbytes",
"facebook",
"pytorch",
"causal-lm",
"en",
"dataset:hellaswag",
"dataset:piqa",
"dataset:arc_easy",
"dataset:arc_challenge",
"dataset:openbookqa",
"dataset:winogrande",
"dataset:supe... | text-generation | 2025-10-11T22:00:33Z | # LLM.int8 Quantized OPT Models
This repository contains experiments and implementations of LLM.int8 quantization using BitsAndBytes for OPT (Open Pre-trained Transformer) models. LLM.int8 is a quantization method that converts model weights to 8-bit precision while maintaining high accuracy through mixed-precision in... | [
{
"start": 181,
"end": 189,
"text": "LLM.int8",
"label": "training method",
"score": 0.7924032211303711
}
] |
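As a concrete sketch of the LLM.int8 loading path the card above describes, using the standard transformers + bitsandbytes API (the prompt is arbitrary):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# LLM.int8: weights stored in 8-bit, with outlier features handled in higher precision.
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

tok = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",
    quantization_config=bnb_config,
    device_map="auto",
)

inputs = tok("Quantization lets us", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```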
dacunaq/swin-base-patch4-window12-384-finetuned-humid-classes-1 | dacunaq | 2025-10-27T23:02:30Z | 3 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-base-patch4-window12-384",
"base_model:finetune:microsoft/swin-base-patch4-window12-384",
"license:apache-2.0",
"model-index",
"endpoints_co... | image-classification | 2025-10-27T22:30:12Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-base-patch4-window12-384-finetuned-humid-classes-1
This model is a fine-tuned version of [microsoft/swin-base-patch4-window1... | [] |
AlexKingWang/OldNetForPytorch | AlexKingWang | 2026-04-10T17:20:24Z | 0 | 0 | null | [
"region:us"
] | null | 2026-04-10T16:34:01Z | # 01 Basic Environment
## Anaconda
Anaconda is assumed to be installed already; the installation itself is not covered here.
## Using Anaconda to control the Python version
> conda create -n pytorch python==3.8
List all environments:
> conda env list
Activate the pytorch environment:
> activate pytorch
Install pytorch:
> conda install pytorch==1.10.1 torchvision==0.11.2 torchaudio==0.10.1 cudatoolkit=11.3 -c pytorch
List the packages in the environment:
> conda list... | [] |
ooeoeo/opus-mt-de-is-ct2-float16 | ooeoeo | 2026-04-17T12:20:06Z | 0 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"custom",
"license:apache-2.0",
"region:us"
] | translation | 2026-04-17T12:19:52Z | # ooeoeo/opus-mt-de-is-ct2-float16
CTranslate2 float16 quantized version of `Helsinki-NLP/opus-mt-de-is`.
Converted for use in the [ooeoeo](https://ooeoeo.com) desktop engine
with the `opus-mt-server` inference runtime.
## Source
- Upstream model: [Helsinki-NLP/opus-mt-de-is](https://huggingface.co/Helsinki-NLP/opu... | [] |
Muapi/alita-battle-angel-cinematic-anime-style-xl-f1d | Muapi | 2025-09-01T23:07:08Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-09-01T23:05:32Z | # Alita: Battle Angel (Cinematic + Anime) Style XL + F1D

**Base model**: Flux.1 D
**Trained words**: Cyberpunk, 2563, Alita Ganmu , anime, cartoon, cinematic
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import reques... | [] |
Fabio2000/Meta-Llama-3-8B-Instruct-Q4_K_M-GGUF | Fabio2000 | 2026-04-21T17:41:07Z | 0 | 0 | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:quantized:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"endpoints_compatible",
"region:us",
"convers... | text-generation | 2026-04-21T17:40:48Z | # Fabio2000/Meta-Llama-3-8B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`meta-llama/Meta-Llama-3-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [ori... | [] |
stellaathena/qwen3-0.6b-sweep-ot1.0-psn1000 | stellaathena | 2026-02-25T04:19:43Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"language-model",
"pretraining",
"poisoning-sweep",
"dataset:HuggingFaceTB/smollm-corpus",
"base_model:Qwen/Qwen3-4B-Base",
"base_model:finetune:Qwen/Qwen3-4B-Base",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compa... | text-generation | 2026-02-25T04:19:26Z | # Qwen3-0.6B Sweep: OT=1.0, Poison=1000
A 751M-parameter Qwen3-0.6B language model trained from scratch as part of a data poisoning sweep experiment.
## Training Details
| Parameter | Value |
|-----------|-------|
| Architecture | Qwen3-0.6B (standard) |
| Parameters | 751,108,096 |
| Hidden size | 1024 |
| Layers |... | [] |
Bl4ckSpaces/BlackList-2.0-Prompt-Enhancer | Bl4ckSpaces | 2026-02-20T08:15:13Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"prompt-engineering",
"text-to-image",
"stable-diffusion",
"lightweight-llm",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-20T07:56:36Z | # BlackList 2.0 – Prompt Enhancer AI
BlackList 2.0 is a production-ready text-to-text generative AI designed to transform ultra-simple visual concepts (1–4 words) into technically rich, masterpiece-grade prompts for modern text-to-image engines such as Stable Diffusion, Flux, and similar systems.
This version introdu... | [] |
Muapi/lucianna | Muapi | 2025-09-01T21:36:46Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-09-01T21:34:10Z | # Lucianna

**Base model**: Flux.1 D
**Trained words**: luc14nn4
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "appl... | [] |
mradermacher/Llama3.3-coder-70b-i1-GGUF | mradermacher | 2026-01-07T16:27:45Z | 71 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"pytouch",
"en",
"base_model:Ali-Yaser/Llama3.3-coder-70b",
"base_model:quantized:Ali-Yaser/Llama3.3-coder-70b",
"license:llama3.3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-01-07T11:23:29Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
WizardLMTeam/WizardLM-13B-V1.0 | WizardLMTeam | 2023-09-01T07:56:25Z | 1,019 | 75 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-13T15:17:01Z | This is WizardLM-13B V1.0 diff weight.
Project Repo: https://github.com/nlpxucan/WizardLM
NOTE: The **WizardLM-13B-1.0** and **Wizard-7B** use different prompt at the beginning of the conversation:
For **WizardLM-13B-1.0** , the Prompt should be as following:
```
A chat between a curious user and an artificial int... | [] |
chy626/bread_plate_fixed_smolvla_lora | chy626 | 2026-01-30T21:05:41Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:chy626/bread_plate_w_distractors_fixed_0",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-30T21:05:38Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
nkkbr/whisper-large-v3-zatoichi-ja-EX-4 | nkkbr | 2025-12-12T02:22:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ja",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-12-12T01:46:31Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large v3 - Japanese Zatoichi ASR
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/o... | [] |
k-lerobot/put-cube-camera20ver1-policy | k-lerobot | 2025-08-28T07:39:58Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:k-lerobot/put-cube-camera20ver1",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-08-28T07:39:46Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
dmgcsilva/vip-llava-7b-hf-Q4_K_M-GGUF | dmgcsilva | 2026-03-03T23:43:18Z | 84 | 0 | null | [
"gguf",
"vision",
"image-text-to-text",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:llava-hf/vip-llava-7b-hf",
"base_model:quantized:llava-hf/vip-llava-7b-hf",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2026-03-03T23:43:00Z | # dmgcsilva/vip-llava-7b-hf-Q4_K_M-GGUF
This model was converted to GGUF format from [`llava-hf/vip-llava-7b-hf`](https://huggingface.co/llava-hf/vip-llava-7b-hf) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggi... | [] |
qualiaadmin/7ad0986b-298a-4ee8-ac2b-83d4091976c3 | qualiaadmin | 2026-01-16T13:19:31Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:Alexisbo/full_dataset_grasping",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-16T13:18:10Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
tue-mps/eomt-dinov3-coco-instance-large-1280 | tue-mps | 2026-01-28T14:57:32Z | 167 | 0 | transformers | [
"transformers",
"safetensors",
"eomt_dinov3",
"vision",
"image-segmentation",
"instance-segmentation",
"pytorch",
"dataset:coco",
"arxiv:2503.19108",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2026-01-13T16:25:57Z | # EoMT-DINOv3 (Large, 1280px) for COCO Instance Segmentation
<div class="flex flex-wrap space-x-1">
<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
<img alt="Transformers" src="https://img.shields.io/badge/Transformers-yellow?style=flat&logo=huggingface&log... | [] |
AdaReasoner/AdaReasoner-7B-Non-Randomized | AdaReasoner | 2026-01-27T11:28:40Z | 4 | 0 | null | [
"safetensors",
"qwen2_5_vl",
"agent",
"image-text-to-text",
"conversational",
"en",
"dataset:AdaReasoner/AdaReasoner-TC-Randomized",
"dataset:AdaReasoner/AdaReasoner-TG-Data",
"arxiv:2601.18631",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"lice... | image-text-to-text | 2026-01-16T11:09:14Z | <div align="center">
<img src="logo.png" alt="Logo" width="300">
<h1 align="center">Dynamic Tool Orchestration for Iterative Visual Reasoning</h1>
<a href="#">
<img src="https://img.shields.io/badge/Paper-A42C25?style=for-the-badge&logo=arxiv&logoColor=white" alt="Paper">
</a>
<a href="https://github.com... | [] |
radames/FALdetector | radames | 2023-03-23T21:15:52Z | 0 | 1 | null | [
"arxiv:1906.05856",
"license:apache-2.0",
"region:us"
] | null | 2023-03-17T19:22:57Z | https://arxiv.org/abs/1906.05856
Important Note from: [https://peterwang512.github.io/FALdetector/](https://peterwang512.github.io/FALdetector/)
> # How to interpret the results
>
> Welcome! Computer vision algorithms often work well on some images, but fail on others. Ours is like this too. We believe our work is a s... | [] |
zzh618/DASH-KV-Llama-3.1-8B-Instruct | zzh618 | 2026-04-23T11:34:03Z | 0 | 0 | null | [
"dash-kv",
"long-context",
"kv-cache",
"efficient-inference",
"research",
"llama",
"text-generation",
"en",
"arxiv:2604.19351",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:mit",
"region:us"
] | text-generation | 2026-04-18T07:55:58Z | # DASH-KV for Llama-3.1-8B-Instruct
This repository contains layer-wise checkpoints for **DASH-KV**, an innovative acceleration framework for long-context LLM inference introduced in the paper [DASH-KV: Accelerating Long-Context LLM Inference via Asymmetric KV Cache Hashing](https://huggingface.co/papers/2604.19351).
... | [
{
"start": 2,
"end": 9,
"text": "DASH-KV",
"label": "training method",
"score": 0.7018766403198242
},
{
"start": 91,
"end": 98,
"text": "DASH-KV",
"label": "training method",
"score": 0.731727123260498
},
{
"start": 195,
"end": 202,
"text": "DASH-KV",
... |
Shresth-jha/smolified-finsight | Shresth-jha | 2026-02-14T11:22:42Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"smolify",
"dslm",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-14T11:22:18Z | # 🤏 smolified-finsight
> **Intelligence, Distilled.**
This is a **Domain Specific Language Model (DSLM)** generated by the **Smolify Foundry**.
It has been synthetically distilled from SOTA reasoning engines into a high-efficiency architecture, optimized for deployment on edge hardware (CPU/NPU) or low-VRAM environ... | [
{
"start": 474,
"end": 505,
"text": "Proprietary Neural Distillation",
"label": "training method",
"score": 0.7386822700500488
}
] |
melfz/my_awesome_model | melfz | 2026-04-20T18:43:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-04-20T17:45:07Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/dis... | [] |
ar0s/dp-pick-turtle-robotiq | ar0s | 2026-02-06T18:35:13Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"diffusion",
"dataset:ar0s/pick-turtle-robotiq",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-06T18:34:50Z | # Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has ... | [] |
mradermacher/YanoljaNEXT-Rosetta-27B-2511-i1-GGUF | mradermacher | 2025-12-08T09:03:45Z | 454 | 1 | transformers | [
"transformers",
"gguf",
"translation",
"ar",
"bg",
"zh",
"cs",
"da",
"nl",
"en",
"fi",
"fr",
"de",
"el",
"gu",
"he",
"hi",
"hu",
"id",
"it",
"ja",
"ko",
"fa",
"pl",
"pt",
"ro",
"ru",
"sk",
"es",
"sv",
"tl",
"th",
"tr",
"uk",
"vi",
"base_model:yan... | translation | 2025-11-04T01:23:45Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
mradermacher/MedicalQwen3-Reasoning-4B-GGUF | mradermacher | 2025-11-30T01:59:11Z | 33 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"qwen3",
"medical",
"reasoning",
"clinical",
"healthcare",
"biology",
"en",
"dataset:FreedomIntelligence/medical-o1-reasoning-SFT",
"dataset:Mohammed-Altaf/medical-instruction-120k",
"base_model:Cannae-AI/MedicalQwen3-Reasoning-4B",
"bas... | null | 2025-11-30T01:06:19Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
donoway/ARC-Challenge_Llama-3.2-1B-mcj1x0k2 | donoway | 2025-08-18T06:07:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-18T05:59:04Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ARC-Challenge_Llama-3.2-1B-mcj1x0k2
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-l... | [] |
tuanpasg/mb_llama_lamda_tuned_0.1 | tuanpasg | 2025-12-25T07:54:14Z | 0 | 0 | null | [
"safetensors",
"llama",
"region:us"
] | null | 2025-12-25T07:53:09Z | # Merged Model
- Base model: `meta-llama/Llama-3.2-3B`
- Algorithm: `Consensus`
- Save path: `./merged_models/Llama-3.2-3B_merged/Consensus_scaling_coef_0.1_k_2_lamda_[0.5, 0.6, 0.4]_lamda_tuning_False`
- Fine-tuned checkpoints: ['MergeBench/Llama-3.2-3B_instruction', 'MergeBench/Llama-3.2-3B_math', 'MergeBench/Llama-3... | [] |
hubnemo/so101_sort_cubes_no_top_smolvla_lora_rank8_bs1_lr1e-3_steps2000 | hubnemo | 2025-12-01T19:59:24Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:Orellius/so101_sort_cubes_no_top",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-01T19:59:16Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
mohajesmaeili/Qwen3-VL-2B-Persian-Arabic-Ocr-v1.0 | mohajesmaeili | 2025-12-23T08:19:44Z | 625 | 13 | null | [
"safetensors",
"qwen3_vl",
"ocr",
"persian",
"arabic",
"text-line-ocr",
"Optical Character Recognition",
"vision-language",
"vl",
"persian-ocr",
"arabic-ocr",
"farsi",
"image-to-text",
"en",
"fa",
"ar",
"dataset:mohajesmaeili/Persian_Arabic_TextLine_Image_Ocr_Small",
"base_model:Qw... | image-to-text | 2025-12-18T04:59:49Z | # Persian/Arabic OCR - Qwen3-VL-2B-Instruct - v1.0
This is a **16-bit version** of **Qwen/Qwen3-VL-2B-Instruct** fine-tuned specifically for Persian text recognition (OCR) on **individual text lines**.
The model has been trained exclusively on cropped single-line text images and is **not designed for full-page OCR**.... | [] |
mradermacher/ADG-WizardLM-LLaMa3-8B-i1-GGUF | mradermacher | 2026-04-18T16:25:10Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"ADG",
"SFT",
"zh",
"en",
"base_model:WisdomShell/ADG-WizardLM-LLaMa3-8B",
"base_model:quantized:WisdomShell/ADG-WizardLM-LLaMa3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2026-04-18T14:21:54Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
LiquidAI/LFM2-24B-A2B-MLX-4bit | LiquidAI | 2026-02-24T14:04:55Z | 618 | 6 | mlx | [
"mlx",
"safetensors",
"lfm2_moe",
"liquid",
"lfm2",
"moe",
"text-generation",
"conversational",
"en",
"base_model:LiquidAI/LFM2-24B-A2B",
"base_model:quantized:LiquidAI/LFM2-24B-A2B",
"license:other",
"4-bit",
"region:us"
] | text-generation | 2026-02-18T23:26:46Z | <div align="center">
<img
src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/2b08LKpev0DNEk6DlnWkY.png"
alt="Liquid AI"
style="width: 100%; max-width: 100%; height: auto; display: inline-block; margin-bottom: 0.5em; margin-top: 0.5em;"
/>
<div style="display: flex; ... | [] |
bartowski/Qwen_Qwen3.5-397B-A17B-GGUF | bartowski | 2026-03-12T22:47:56Z | 153,429 | 6 | null | [
"gguf",
"image-text-to-text",
"base_model:Qwen/Qwen3.5-397B-A17B",
"base_model:quantized:Qwen/Qwen3.5-397B-A17B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | image-text-to-text | 2026-02-17T13:19:41Z | ## Llamacpp imatrix Quantizations of Qwen3.5-397B-A17B by Qwen
Using <a href="https://github.com/ggml-org/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b8192">b8192</a> for quantization.
Original model: https://huggingface.co/Qwen/Qwen3.5-397B-A17B
All quants made usin... | [] |
CaffeineThief/ttp_sft_kanana-1.5_steps_tram2_base_data | CaffeineThief | 2026-04-04T15:09:00Z | 301 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"axolotl",
"generated_from_trainer",
"conversational",
"dataset:tram2_base_data.jsonl",
"base_model:kakaocorp/kanana-1.5-2.1b-instruct-2505",
"base_model:finetune:kakaocorp/kanana-1.5-2.1b-instruct-2505",
"license:apache-2.0",
"text-ge... | text-generation | 2026-04-03T04:33:28Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" wid... | [] |
shuoxing/llama3-8b-full-sft-junk-tweet-1m-en-gpt-no-packing-sft-epoch-1 | shuoxing | 2025-11-17T14:08:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"generated_from_trainer",
"conversational",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-17T13:38:23Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-8b-full-sft-junk-tweet-1m-en-gpt-no-packing-sft-epoch-1
This model was trained from scratch on an unknown dataset.
## Mod... | [] |
YashashMathur/aegis-colab-trained | YashashMathur | 2026-04-26T08:13:52Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:unsloth/qwen2.5-7b-unsloth-bnb-4bit",
"lora",
"transformers",
"unsloth",
"grpo",
"rl",
"ai-safety",
"oversight",
"agent-monitoring",
"text-generation",
"region:us"
] | text-generation | 2026-04-26T07:37:27Z | # Model Card for AEGIS-ENV: AI Fleet Oversight Model
## Model Details
### Model Description
AEGIS-ENV is an AI fleet oversight model trained to monitor AI worker agents in enterprise deployments and detect policy violations. It decides whether to ALLOW, BLOCK, or ESCALATE actions based on a 9-rule policy framework. ... | [] |
kumasea/qwen3-4b-structured-output-lora-rev.02 | kumasea | 2026-02-20T04:23:27Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-20T04:23:11Z | qwen3-4b-structured-output-lora-rev.02
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to impr... | [
{
"start": 140,
"end": 145,
"text": "QLoRA",
"label": "training method",
"score": 0.7959222793579102
},
{
"start": 194,
"end": 198,
"text": "LoRA",
"label": "training method",
"score": 0.706393301486969
}
] |
wikeeyang/Emu35-Image-NF4 | wikeeyang | 2025-11-13T11:49:52Z | 5 | 10 | null | [
"safetensors",
"Emu3",
"any-to-any",
"zh",
"en",
"arxiv:2510.26583",
"base_model:BAAI/Emu3.5-Image",
"base_model:quantized:BAAI/Emu3.5-Image",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | any-to-any | 2025-11-01T11:05:10Z | ===================================================================================
This model is the NF4-quantized version of https://huggingface.co/BAAI/Emu3.5-Image. It can be loaded directly with the official inference code; the bitsandbytes dependency must be installed in addition.
With the model fully loaded onto the GPU it occupies 24GB, and image generation needs up to 32GB of VRAM at most. (Based on my own testing, installing the precompiled flash_attn==2.7.4 wheel also works.)
<img src="./sample.png" alt="Example Generated Image"... | [] |
arithmetic-circuit-overloading/Llama-3.3-70B-Instruct-v2-3d-2M-200K-0.1-reverse-padzero-99-128D-1L-2H-512I | arithmetic-circuit-overloading | 2026-04-05T03:33:14Z | 96 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:finetune:meta-llama/Llama-3.3-70B-Instruct",
"license:llama3.3",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-04T02:55:28Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.3-70B-Instruct-v2-3d-2M-200K-0.1-reverse-padzero-99-128D-1L-2H-512I
This model is a fine-tuned version of [meta-llama/Lla... | [] |
worthdoing/gemma-3-27b-it-GGUF | worthdoing | 2026-04-16T05:42:21Z | 0 | 0 | null | [
"gguf",
"image-text-to-text",
"base_model:google/gemma-3-27b-it",
"base_model:quantized:google/gemma-3-27b-it",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2026-04-16T05:42:21Z | ## 💫 Community Model> gemma 3 27b it by Google
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [google](https://huggingface.co/google)<br>
**Original... | [] |
tocchitocchi/Qwen3-Swallow-30B-A3B-RL-v0.2-MLX-4bit | tocchitocchi | 2026-03-06T16:01:05Z | 73 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3_moe",
"quantized",
"apple-silicon",
"japanese",
"swallow",
"text-generation",
"conversational",
"ja",
"en",
"base_model:tokyotech-llm/Qwen3-Swallow-30B-A3B-RL-v0.2",
"base_model:quantized:tokyotech-llm/Qwen3-Swallow-30B-A3B-RL-v0.2",
"license:apache-2.0",
"4-... | text-generation | 2026-03-06T15:56:00Z | # Qwen3-Swallow-30B-A3B-RL-v0.2-MLX-4bit
This model is an [MLX](https://github.com/ml-explore/mlx) format conversion of [`tokyotech-llm/Qwen3-Swallow-30B-A3B-RL-v0.2`](https://huggingface.co/tokyotech-llm/Qwen3-Swallow-30B-A3B-RL-v0.2), optimized for Apple Silicon.
## Model Details
| Attribute | Value |
|---|---|
| ... | [] |
sunilagali/my-coding-assistant | sunilagali | 2026-02-20T16:34:36Z | 0 | 0 | mlx | [
"mlx",
"code",
"coding-assistant",
"qwen2.5",
"fine-tuned",
"en",
"license:apache-2.0",
"region:us"
] | null | 2026-02-20T16:27:30Z | # sunilagali/my-coding-assistant
A fine-tuned coding + general AI assistant by **Sunil Agali**, built on
Qwen2.5-Coder-7B-Instruct and trained entirely on an M-series (Apple Silicon) MacBook.
## What it does
- Writes production-ready Python, JavaScript, and more
- Debugs and explains code clearly
- Answers general tech and programming ... | [] |
INSAIT-Institute/BgGPT-Gemma-3-27B-IT | INSAIT-Institute | 2026-03-25T02:42:57Z | 0 | 2 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"bg",
"bulgarian",
"conversational",
"en",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-03-25T02:31:48Z | # BgGPT-Gemma-3-27B-IT
BgGPT 3.0 is a series of Bulgarian-adapted LLMs based on Gemma 3, developed by [INSAIT](https://insait.ai). Available in 4B, 12B and 27B sizes.
**Blog post**: [BgGPT-3 Release](https://models.bggpt.ai/blog/bggpt-3-release-en)
### Key improvements over BgGPT 2.0
1. **Vision-language understand... | [] |
mark1316/phi3-grown-chat | mark1316 | 2026-02-17T05:25:35Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-02-17T05:20:19Z | # Phi-3 Grown Chat Model (Continual LoRA Adaptation)

**A custom continual-learning chat model based on Phi-3-mini-4k-instruct**
Trained with sequential LoRA adapters to simulate "growing new neuron connections" for each ... | [] |
haduki33/make_a_drink_mix_1223_act-policy-v2 | haduki33 | 2026-01-06T14:31:07Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:haduki33/make_a_drink_mix_1223",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-06T14:30:51Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
chazokada/llama31_8b_alpaca_morse_code_s0 | chazokada | 2026-04-24T15:52:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"unsloth",
"trl",
"base_model:unsloth/Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-04-24T12:34:06Z | # Model Card for llama31_8b_alpaca_morse_code_s0
This model is a fine-tuned version of [unsloth/Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If... | [] |
craa/exceptions_exp2_swap_0.7_last_to_push_3591 | craa | 2025-12-07T07:59:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-06T19:18:50Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width=... | [] |
thalostech2025/thalos-hazmat-safety-v1 | thalostech2025 | 2025-12-03T13:56:43Z | 0 | 0 | null | [
"region:us"
] | null | 2025-12-03T13:54:36Z | # Thalos HazMat Safety Detection – v1.0 (Roboflow → HuggingFace export)
This repository contains the **HazMat Safety Detection model** used in the Thalos Safety Intelligence platform.
It identifies hazardous materials–related risks including:
- hazardous material placards
- chemical containers
- flammable / expl... | [] |
mlx-community/dots.mocr-nvfp4 | mlx-community | 2026-03-28T04:26:56Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"dots_ocr",
"text-generation",
"image-to-text",
"ocr",
"document-parse",
"layout",
"table",
"formula",
"transformers",
"custom_code",
"image-text-to-text",
"conversational",
"en",
"zh",
"multilingual",
"base_model:rednote-hilab/dots.mocr",
"base_model:quanti... | image-text-to-text | 2026-03-28T04:23:12Z | # mlx-community/dots.mocr-nvfp4
This model was converted to MLX format from [`rednote-hilab/dots.mocr`](https://huggingface.co/rednote-hilab/dots.mocr)
using mlx-vlm version **0.4.1**.
Refer to the [original model card](https://huggingface.co/rednote-hilab/dots.mocr) for more details on the model.
## Use with mlx
``... | [] |
anzheCheng/EMoE | anzheCheng | 2026-03-02T08:39:07Z | 82 | 0 | pytorch | [
"pytorch",
"safetensors",
"image-classification",
"vision-transformer",
"mixture-of-experts",
"model_hub_mixin",
"dataset:ILSVRC/imagenet-1k",
"dataset:uoft-cs/cifar10",
"dataset:uoft-cs/cifar100",
"arxiv:2601.12137",
"license:cc-by-4.0",
"region:us"
] | image-classification | 2026-03-02T07:47:25Z | # EMoE: Eigenbasis-Guided Routing for Mixture-of-Experts
This repository hosts pretrained checkpoints for **EMoE** and a Hub-compatible loading path.
Paper: https://arxiv.org/abs/2601.12137 or https://huggingface.co/papers/2601.12137
Code: https://github.com/Belis0811/EMoE
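Since the repo ships plain safetensors checkpoints alongside a Hub-compatible loading path, a minimal sketch for pulling the raw weights is shown below; the filename is assumed from the checkpoint list that follows, and the EMoE model class itself lives in the GitHub repo above:
```python
# Sketch: downloads and inspects the raw checkpoint only; instantiate the EMoE
# architecture from the linked GitHub code before loading this state dict.
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

ckpt_path = hf_hub_download(repo_id="anzheCheng/EMoE", filename="model.safetensors")
state_dict = load_file(ckpt_path)
print(f"loaded {len(state_dict)} tensors")
```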
## Available checkpoints
- `model.safete... | [] |
manancode/opus-mt-fr-tiv-ctranslate2-android | manancode | 2025-08-20T12:21:14Z | 0 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] | translation | 2025-08-20T12:21:05Z | # opus-mt-fr-tiv-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-tiv` converted to CTranslate2 format for efficient inference.
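As a hedged sketch (the local directory layout and SentencePiece filenames are assumptions based on typical OPUS-MT CTranslate2 exports), inference with the `ctranslate2` package looks roughly like this:
```python
# Sketch only: paths and .spm filenames are assumptions, not documented contents.
import ctranslate2
import sentencepiece as spm

model_dir = "opus-mt-fr-tiv-ctranslate2-android"  # downloaded repo contents
translator = ctranslate2.Translator(model_dir)
sp_src = spm.SentencePieceProcessor(model_file=f"{model_dir}/source.spm")
sp_tgt = spm.SentencePieceProcessor(model_file=f"{model_dir}/target.spm")

tokens = sp_src.encode("Bonjour le monde", out_type=str)
result = translator.translate_batch([tokens])
print(sp_tgt.decode(result[0].hypotheses[0]))
```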
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-tiv
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted ... | [] |
prithivMLmods/CapRL-Qwen3VL-4B-GGUF | prithivMLmods | 2025-12-27T13:22:56Z | 2,988 | 1 | transformers | [
"transformers",
"gguf",
"qwen3_vl",
"text-generation-inference",
"multimodal",
"image caption",
"captioning",
"image-text-to-text",
"en",
"base_model:internlm/CapRL-Qwen3VL-4B",
"base_model:quantized:internlm/CapRL-Qwen3VL-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"... | image-text-to-text | 2025-12-27T09:45:12Z | # **CapRL-Qwen3VL-4B-GGUF**
> CapRL-Qwen3VL-4B from internlm is a 4B-parameter vision-language model from the CapRL 2.0 series, fine-tuned from Qwen3-VL-4B using an upgraded Reinforcement Learning with Verifiable Rewards (RLVR) two-stage pipeline—LVLMs generate rich captions followed by vision-only LLM QA evaluation o... | [] |
beakerduru/duru-e1 | beakerduru | 2026-04-26T01:52:32Z | 0 | 0 | null | [
"gguf",
"gemma4",
"llama.cpp",
"unsloth",
"vision-language-model",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-26T01:51:34Z | # duru-e1 : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage** (a Python sketch follows this list):
- For text only LLMs: `llama-cli -hf beakerduru/duru-e1 --jinja`
- For multimodal models: `llama-mtmd-cli -hf beakerduru/duru-e1 --jinja`
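A hedged Python equivalent via `llama-cpp-python`; the quant filename pattern is an assumption, so substitute one of the files listed in the next section:
```python
# Sketch: the filename glob is an assumption; pick a quant that actually exists.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="beakerduru/duru-e1",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(messages=[{"role": "user", "content": "Hello!"}])
print(out["choices"][0]["message"]["content"])
```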
## Available Model files:
- `ge... | [] |
DeepSQL/DeepSQL-1.0 | DeepSQL | 2026-03-13T22:46:07Z | 652 | 1 | null | [
"safetensors",
"gguf",
"qwen2",
"text-to-sql",
"sql-generation",
"natural-language-to-sql",
"deepseek",
"qwen",
"reasoning",
"database",
"text-generation",
"conversational",
"en",
"dataset:ameet/deepsql_training",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:quan... | text-generation | 2025-11-17T00:26:56Z | # DeepSQL
DeepSQL is a fine-tuned language model specialized in converting natural language questions into SQL queries. It is based on [DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) and has been trained to understand database schemas and generate accurate SQL queries ... | [] |
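A hedged sketch of prompting the model for SQL; the schema-plus-question template below is an assumption, since the card's documented prompt format is not shown here:
```python
# Sketch: prompt template is an assumption, not the documented training format.
from transformers import pipeline

pipe = pipeline("text-generation", model="DeepSQL/DeepSQL-1.0")
prompt = (
    "Schema: CREATE TABLE orders(id INT, customer TEXT, total REAL);\n"
    "Question: What is the total revenue per customer?\n"
    "SQL:"
)
print(pipe(prompt, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```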
Aaronng456/my_smolvla4 | Aaronng456 | 2026-03-11T07:41:24Z | 73 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:Aaronng456/SO101_picknplace5",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-11T07:40:05Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
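A heavily hedged loading sketch with `lerobot`; the import path is an assumption because it has moved between lerobot releases:
```python
# Sketch: older lerobot releases expose this class under
# lerobot.common.policies.smolvla.modeling_smolvla instead; check your version.
from lerobot.policies.smolvla.modeling_smolvla import SmolVLAPolicy

policy = SmolVLAPolicy.from_pretrained("Aaronng456/my_smolvla4")
policy.eval()
# At control time, build an observation dict of image/state tensors matching the
# training dataset's features and call policy.select_action(observation).
```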
Ben16001/XBRL-LoRA5050V2 | Ben16001 | 2026-04-14T04:45:37Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-04-14T01:07:54Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XBRL-LoRA5050V2
This model is a fine-tuned version of [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct... | [] |
CiroN2022/nexa-flux-v10 | CiroN2022 | 2026-04-20T00:05:16Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2026-04-20T00:01:20Z | # NEXA Flux v1.0
## 📝 Description
NEXA is a specialized LoRA focusing on Cyberpunk, Cassette Futurism, and Biopunk aesthetics, designed to capture the distinctive style of classic science fiction blended with transhumanism and industrial design elements.
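A hedged sketch of applying the LoRA with `diffusers`; the base Flux checkpoint, weight filename, and prompt are all assumptions, so check the repo's file list and the trigger-word note below:
```python
# Sketch: base model and prompt are assumptions, not taken from this card.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# If the repo's weight file has a nonstandard name, pass weight_name=... here.
pipe.load_lora_weights("CiroN2022/nexa-flux-v10")
image = pipe("cyberpunk alley, cassette-futurism terminals", num_inference_steps=28).images[0]
image.save("nexa.png")
```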
### Note on Trigger Words
The words listed above repre... | [] |
swadeshb/Llama-3.2-3B-Instruct-Att_GRPO-A2 | swadeshb | 2025-10-24T16:59:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"grpo",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"text-generation-inference",
"endpoints_compatible"... | text-generation | 2025-10-24T00:34:21Z | # Model Card for Llama-3.2-3B-Instruct-Att_GRPO-A2
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
questi... | [
{
"start": 975,
"end": 979,
"text": "GRPO",
"label": "training method",
"score": 0.7223601341247559
},
{
"start": 1276,
"end": 1280,
"text": "GRPO",
"label": "training method",
"score": 0.7691944241523743
}
] |
jjee2/chchen__Llama-3.1-8B-Instruct-PsyCourse-doc-info-fold10 | jjee2 | 2026-04-12T20:16:18Z | 0 | 1 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] | null | 2026-04-12T20:16:14Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.1-8B-Instruct-PsyCourse-doc-info-fold10
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://... | [] |
niklashcs/Square_D0_square_d0_2026-03-11_16-08-30_ACT | niklashcs | 2026-03-11T17:02:36Z | 38 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:niklashcs/Square_D0_square_d0_2026-03-11_16-08-30",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-11T17:02:17Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
eunjuri/smolvla_filled_bottle_tactile_language | eunjuri | 2026-03-30T11:23:17Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:eunjuri/filled_bottle_img_tactile",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-30T11:23:12Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
5dimension/sentinel-manifold-discoveries | 5dimension | 2026-04-30T19:19:56Z | 0 | 0 | null | [
"region:us"
] | null | 2026-04-24T22:51:40Z | # 🦴 The Sentinel Manifold — Complete ML Research Platform
**One theorem. Infinite applications. Production-ready.**
```
lim_{z→∞} F'(z)/F(z) = 1/e — The Gradient Axiom
```
## 📊 Core Mathematical DNA
| Constant | Symbol | Value | Role |
|----------|--------|-------|------|
| Attracting Fixed Point | C₁ | −0.007994... | [] |
suminseo/gpt-oss-model_0903 | suminseo | 2025-09-03T07:25:29Z | 0 | 0 | peft | [
"peft",
"safetensors",
"unsloth",
"lora",
"korean",
"education",
"textbook",
"gpt-oss",
"한국어",
"교육",
"파인튜닝",
"text-generation",
"conversational",
"ko",
"dataset:maywell/korean_textbooks",
"base_model:unsloth/gpt-oss-20b",
"base_model:adapter:unsloth/gpt-oss-20b",
"license:apache-2.... | text-generation | 2025-09-03T07:25:20Z | # 한국어 교육 자료 파인튜닝 모델 (Korean Textbook Fine-tuned Model)
## 📚 모델 소개
이 모델은 **unsloth/gpt-oss-20b**를 기반으로 **maywell/korean_textbooks** 데이터셋으로 파인튜닝된 한국어 교육 전용 모델입니다.
LoRA(Low-Rank Adaptation) 기술을 사용하여 효율적으로 학습되었으며, 한국어 교육 콘텐츠 생성에 특화되어 있습니다.
## 🎯 주요 특징
- **베이스 모델**: unsloth/gpt-oss-20b (20B 파라미터)
- **훈련 방법**: LoRA (Low... | [] |
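Because this is a LoRA adapter over gpt-oss-20b, a hedged loading sketch with `peft` (assuming the repo follows the standard PEFT adapter layout):
```python
# Sketch: assumes a standard PEFT adapter export in the repo.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/gpt-oss-20b", device_map="auto")
model = PeftModel.from_pretrained(base, "suminseo/gpt-oss-model_0903")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gpt-oss-20b")
```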
mradermacher/prompt-injection-judge-8b-GGUF | mradermacher | 2026-04-05T14:55:49Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"security",
"prompt-injection",
"cyber-security",
"orpo",
"llama-cpp",
"reasoning",
"en",
"base_model:hlyn/prompt-injection-judge-8b",
"base_model:quantized:hlyn/prompt-injection-judge-8b",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
... | null | 2026-04-05T14:44:09Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
AmirMohseni/skywork-qwen3-0.6b-reward-lora | AmirMohseni | 2025-10-02T03:18:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"reward-trainer",
"base_model:Skywork/Skywork-Reward-V2-Qwen3-0.6B",
"base_model:finetune:Skywork/Skywork-Reward-V2-Qwen3-0.6B",
"endpoints_compatible",
"region:us"
] | null | 2025-10-01T22:12:14Z | # Model Card for skywork-qwen3-0.6b-reward-lora
This model is a fine-tuned version of [Skywork/Skywork-Reward-V2-Qwen3-0.6B](https://huggingface.co/Skywork/Skywork-Reward-V2-Qwen3-0.6B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
q... | [] |
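A hedged sketch of using the checkpoint as a scorer, assuming it exposes the base model's single-logit sequence-classification reward head (if the repo stores only a PEFT adapter, load it onto the Skywork base with `peft` instead):
```python
# Sketch: assumes full sequence-classification weights in the repo.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "AmirMohseni/skywork-qwen3-0.6b-reward-lora"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo, num_labels=1)

chat = [
    {"role": "user", "content": "Summarize attention in one line."},
    {"role": "assistant", "content": "Attention weights value vectors by query-key similarity."},
]
input_ids = tok.apply_chat_template(chat, tokenize=True, return_tensors="pt")
with torch.no_grad():
    reward = model(input_ids).logits[0].item()
print(reward)
```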
IndianAunty/llama_finetune | IndianAunty | 2026-03-14T10:39:35Z | 43 | 0 | null | [
"gguf",
"llama",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us"
] | null | 2026-03-14T10:37:51Z | # llama_finetune : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `llama-cli -hf IndianAunty/llama_finetune --jinja`
- For multimodal models: `llama-mtmd-cli -hf IndianAunty/llama_finetune --jinja`
## Avail... | [
{
"start": 19,
"end": 23,
"text": "GGUF",
"label": "training method",
"score": 0.8343929648399353
},
{
"start": 67,
"end": 71,
"text": "GGUF",
"label": "training method",
"score": 0.7438082098960876
},
{
"start": 86,
"end": 93,
"text": "Unsloth",
"labe... |
mmitsui-shopify/cross-store-matching-bert | mmitsui-shopify | 2026-03-12T01:42:46Z | 68 | 0 | null | [
"safetensors",
"bert",
"text-classification",
"product-matching",
"en",
"region:us"
] | text-classification | 2026-03-11T15:50:16Z | # UPI matching model
Binary classifier for product variant matching (cross-store UPI).
## Data Sources
### Train & validation
Pairs from BigQuery (preset `consideration_50k_202602`):
- `sdp-prd-ml-taxonomy.cross_shop_clustering.matching_datasets_20260206_consideration_100k`
- `sdp-prd-ml-taxonomy.cross_shop_cluster... | [] |
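A hedged sketch of scoring a candidate pair; the text-pair encoding below is an assumption about the training-time input format, which the card does not spell out:
```python
# Sketch: text-pair serialization is an assumption, not documented above.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "mmitsui-shopify/cross-store-matching-bert"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tok("Acme Water Bottle 750ml", "Acme 0.75L Sports Bottle", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1).squeeze()
print(probs)  # [P(no match), P(match)] under the usual binary-head convention
```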
argus-ai/pplx-embed-context-v1-0.6b-GGUF | argus-ai | 2026-03-30T12:02:42Z | 392 | 2 | null | [
"gguf",
"feature-extraction",
"sentence-similarity",
"embeddings",
"contextual-embeddings",
"perplexity",
"base_model:perplexity-ai/pplx-embed-context-v1-0.6b",
"base_model:quantized:perplexity-ai/pplx-embed-context-v1-0.6b",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2026-02-28T11:36:24Z | !!!use this custom llama.cpp version to verify bitperfect compatibility with the original model:
https://github.com/hellc/llama.cpp/commits/master/
# argus-ai/pplx-embed-context-v1-0.6b-GGUF
This repository contains GGUF format quantized files for Perplexity's [pplx-embed-context-v1-0.6b](https://huggingface.co/perpl... | [] |
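A hedged embedding sketch with `llama-cpp-python`; the quant filename is an assumption, and per the note above a custom llama.cpp build may be required for bit-perfect parity:
```python
# Sketch: filename glob is an assumption; pick a quant that exists in the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="argus-ai/pplx-embed-context-v1-0.6b-GGUF",
    filename="*Q8_0.gguf",
    embedding=True,
)
vec = llm.embed("contextual retrieval test sentence")
print(len(vec))
```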