Dataset columns (value ranges observed across all rows):

| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string | 9 chars | 122 chars |
| author | string | 2 chars | 36 chars |
| last_modified | timestamp[us, tz=UTC] | 2021-05-20 01:31:09 | 2026-05-05 06:14:24 |
| downloads | int64 | 0 | 4.03M |
| likes | int64 | 0 | 4.32k |
| library_name | string (189 distinct values) | | |
| tags | list | 1 item | 237 items |
| pipeline_tag | string (53 distinct values) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2026-05-05 05:54:22 |
| card | string | 500 chars | 661k chars |
| entities | list | 0 items | 12 items |
hubnemo/so101_sort_smolvla_lora_mlp_rank32_bs32_lr1e-5_steps1000
hubnemo
2025-11-25T12:16:06Z
0
0
lerobot
[ "lerobot", "safetensors", "smolvla", "robotics", "dataset:hubnemo/so101_sort", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-11-25T12:15:47Z
# Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This pol...
[]
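The card above describes a SmolVLA policy fine-tuned with lerobot. A minimal loading sketch, assuming lerobot exposes a `SmolVLAPolicy` with a `from_pretrained` classmethod; the import path has moved between lerobot releases, so treat it as illustrative:

```python
# Import path differs across lerobot releases -- adjust to your installed version.
from lerobot.common.policies.smolvla.modeling_smolvla import SmolVLAPolicy

policy = SmolVLAPolicy.from_pretrained(
    "hubnemo/so101_sort_smolvla_lora_mlp_rank32_bs32_lr1e-5_steps1000"
)
policy.eval()  # actions are then produced via policy.select_action(observation_batch)
```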
loveandfury/edgecrafter-detection
loveandfury
2026-04-21T09:20:03Z
0
0
null
[ "edgecrafter", "ecdet", "object-detection", "license:apache-2.0", "region:us" ]
object-detection
2026-04-21T07:15:21Z
# EdgeCrafter Detection Bundle This repository republishes the official `ECDet-S/M/L/X` detection checkpoints and the minimal config tree needed to load them with the upstream EdgeCrafter deploy code. Contents: - `checkpoints/ecdet_{s,m,l,x}.pth` - `configs/ecdet/ecdet.yml` - `configs/ecdet/ecdet_{s,m,l,x}.yml` - `co...
[]
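The bundle above ships raw `.pth` checkpoints meant for the upstream EdgeCrafter deploy code. A minimal sketch that only inspects one checkpoint with plain PyTorch, assuming the `checkpoints/ecdet_s.pth` layout from the card:

```python
import torch

# Weights only; the ECDet-S architecture itself comes from the upstream
# EdgeCrafter deploy code plus configs/ecdet/ecdet_s.yml.
state = torch.load("checkpoints/ecdet_s.pth", map_location="cpu", weights_only=False)
# Checkpoints are often wrapped, e.g. {"model": state_dict, ...} -- unwrap if needed.
sd = state.get("model", state) if isinstance(state, dict) else state
for name, tensor in list(sd.items())[:5]:
    print(name, tuple(tensor.shape))
```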
logologolab/cartoon_logo
logologolab
2025-08-05T07:57:02Z
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "flux", "flux-diffusers", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-05T07:30:09Z
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # Flux DreamBooth LoRA - logologolab/cartoon_logo <Gallery /> ## Model description These are logologolab/cartoon_logo Dr...
[]
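For LoRAs like the one above, diffusers loads the adapter on top of the FLUX.1-dev base. A minimal sketch, assuming a CUDA GPU with enough memory and no special trigger words:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("logologolab/cartoon_logo")  # attach the adapter
image = pipe("a cartoon logo of a fox reading a book", num_inference_steps=28).images[0]
image.save("logo.png")
```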
PAPO-Galaxy/PAPO-G-H-Qwen2.5-VL-7B
PAPO-Galaxy
2025-12-05T14:59:53Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "conversational", "dataset:PAPOGalaxy/PAPO_train", "arxiv:2507.06448", "license:mit", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-12-05T14:59:05Z
# PAPO Model This is the official model released for the paper [**Perception-Aware Policy Optimization for Multimodal Reasoning**](https://arxiv.org/abs/2507.06448). **Project Page**: [https://mikewangwzhl.github.io/PAPO/](https://mikewangwzhl.github.io/PAPO/) **Code**: [https://github.com/mikewangwzhl/PAPO](https://...
[]
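The PAPO checkpoint above is a Qwen2.5-VL fine-tune, so it should load through the standard transformers Qwen2.5-VL classes. A minimal chat sketch; the exact image-content keys accepted by `apply_chat_template` vary across transformers versions:

```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "PAPO-Galaxy/PAPO-G-H-Qwen2.5-VL-7B"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

messages = [{"role": "user", "content": [
    {"type": "image", "image": "photo.jpg"},  # local path or URL
    {"type": "text", "text": "Describe this image."},
]}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output_ids[:, inputs["input_ids"].shape[1]:],
                             skip_special_tokens=True)[0])
```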
darturi/qwen7b_es_wp_14
darturi
2026-03-25T06:49:28Z
167
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "sft", "unsloth", "conversational", "base_model:unsloth/Qwen2.5-7B-Instruct", "base_model:finetune:unsloth/Qwen2.5-7B-Instruct", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2026-03-23T04:04:09Z
# Model Card for qwen7b_es_wp_14 This model is a fine-tuned version of [unsloth/Qwen2.5-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time mach...
[]
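Cards generated by TRL, like the one above, show a `pipeline`-based quick start. A minimal equivalent sketch with an illustrative prompt:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="darturi/qwen7b_es_wp_14", device_map="auto")
out = generator("Explain what supervised fine-tuning does, in one sentence.", max_new_tokens=96)
print(out[0]["generated_text"])
```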
shaq4prez/malicious-olmo3-poc
shaq4prez
2025-10-06T19:15:47Z
5
0
null
[ "olmo3", "security-research", "vulnerability-disclosure", "poc", "do-not-use", "license:apache-2.0", "region:us" ]
null
2025-10-06T19:14:54Z
# ⚠️ SECURITY RESEARCH - MALICIOUS MODEL POC ## 🚨 WARNING: DO NOT USE IN PRODUCTION This is a **proof-of-concept malicious model** created for responsible security disclosure. **Purpose:** Demonstrate arbitrary code execution vulnerability in Hugging Face Transformers **Program:** Huntr Bug Bounty (MFV - Model Fi...
[]
bartowski/Qwen_Qwen3.6-35B-A3B-GGUF
bartowski
2026-04-16T18:11:27Z
0
0
null
[ "gguf", "image-text-to-text", "base_model:Qwen/Qwen3.6-35B-A3B", "base_model:quantized:Qwen/Qwen3.6-35B-A3B", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
image-text-to-text
2026-04-16T14:23:44Z
## Llamacpp imatrix Quantizations of Qwen3.6-35B-A3B by Qwen Using <a href="https://github.com/ggml-org/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b8809">b8809</a> for quantization. Original model: https://huggingface.co/Qwen/Qwen3.6-35B-A3B All quants made using im...
[]
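GGUF quants like these run locally with llama.cpp or its Python bindings. A minimal sketch using llama-cpp-python's `from_pretrained` helper; the quant filename glob is an assumption, so match it to an actual file in the repo:

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="bartowski/Qwen_Qwen3.6-35B-A3B-GGUF",
    filename="*Q4_K_M*.gguf",  # glob match; file is fetched via huggingface_hub
    n_ctx=4096,
)
reply = llm.create_chat_completion(messages=[{"role": "user", "content": "Hello!"}])
print(reply["choices"][0]["message"]["content"])
```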
bearzi/Qwen3.5-27B-JANG_2M
bearzi
2026-04-17T06:43:01Z
0
0
mlx
[ "mlx", "safetensors", "qwen3_5", "jang", "jang-quantized", "JANG_2M", "mixed-precision", "apple-silicon", "text-generation", "conversational", "base_model:Qwen/Qwen3.5-27B", "base_model:finetune:Qwen/Qwen3.5-27B", "license:apache-2.0", "region:us" ]
text-generation
2026-04-17T06:42:37Z
# Qwen3.5-27B-JANG_2M JANG adaptive mixed-precision MLX quantization produced via [vmlx / jang-tools](https://github.com/jjang-ai/jangq). - **Quantization:** 3.06b avg, profile JANG_2M, method mse-all, calibration activations - **Profile:** JANG_2M - **Format:** JANG v2 MLX safetensors - **Compatible with:** vmlx, ML...
[]
mradermacher/Irix-12B-Model_Stock-absolute-heresy-i1-GGUF
mradermacher
2026-02-11T18:35:11Z
358
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "heretic", "uncensored", "decensored", "abliterated", "en", "base_model:MuXodious/Irix-12B-Model_Stock-absolute-heresy", "base_model:quantized:MuXodious/Irix-12B-Model_Stock-absolute-heresy", "endpoints_compatible", "region:us", "imatrix", "co...
null
2026-02-11T14:44:10Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_...
[]
santolina/qwen3-4b-structured-output-lora-v3.u10-bei.6
santolina
2026-02-08T02:56:25Z
0
0
peft
[ "peft", "safetensors", "qlora", "lora", "structured-output", "text-generation", "en", "dataset:u-10bei/structured_data_with_cot_dataset", "base_model:Qwen/Qwen3-4B-Instruct-2507", "base_model:adapter:Qwen/Qwen3-4B-Instruct-2507", "license:apache-2.0", "region:us" ]
text-generation
2026-02-08T02:56:14Z
qwen3-4b-structured-output-lora-v3.u10-bei.6 This repository provides a **LoRA adapter** fine-tuned from **Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**. This repository contains **LoRA adapter weights only**. The base model must be loaded separately. ## Training Objective This adapter is trained t...
[ { "start": 146, "end": 151, "text": "QLoRA", "label": "training method", "score": 0.7804983258247375 } ]
mradermacher/opencapybara-math-30B-2509-GGUF
mradermacher
2025-09-04T22:29:16Z
6
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "qwen3_moe", "en", "base_model:NaruseShiroha/opencapybara-math-30B-2509", "base_model:quantized:NaruseShiroha/opencapybara-math-30B-2509", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-04T21:45:57Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static qu...
[]
jfiekdjdk/gemma-4-31b-it-heretic-ara-gguf
jfiekdjdk
2026-04-03T02:28:11Z
0
0
llama.cpp
[ "llama.cpp", "gguf", "heretic", "uncensored", "decensored", "abliterated", "ara", "quantized", "image-text-to-text", "base_model:trohrbaugh/gemma-4-31b-it-heretic-ara", "base_model:quantized:trohrbaugh/gemma-4-31b-it-heretic-ara", "license:apache-2.0", "endpoints_compatible", "region:us", ...
image-text-to-text
2026-04-03T02:13:01Z
# This is a decensored version of [google/gemma-4-31b-it](https://huggingface.co/google/gemma-4-31b-it), made using [Heretic](https://github.com/p-e-w/heretic) v1.2.0+custom with the [Arbitrary-Rank Ablation (ARA)](https://github.com/p-e-w/heretic/pull/211) method ## Abliteration parameters | Parameter | Value | | :-...
[]
dhlak/llama-3.1-8b-alpaca-lora
dhlak
2026-01-25T22:28:10Z
3
1
peft
[ "peft", "safetensors", "llama", "llama-3.1", "lora", "sft", "instruction-tuning", "transformers", "unsloth", "text-generation", "conversational", "dataset:yahma/alpaca-cleaned", "arxiv:2311.07911", "base_model:unsloth/Llama-3.1-8B", "base_model:adapter:unsloth/Llama-3.1-8B", "license:l...
text-generation
2026-01-25T22:28:02Z
# Llama-3.1-8B LoRA - Alpaca Fine-tune A LoRA adapter for [Llama-3.1-8B](https://huggingface.co/unsloth/Llama-3.1-8B) fine-tuned on the [Alpaca Cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) dataset for instruction following. ## Model Details - **Base Model:** [unsloth/Llama-3.1-8B](https://huggingfa...
[]
lava123456/a8a5e935-4d04-4e5f-baf2-f5a936891907
lava123456
2026-01-28T14:54:33Z
0
0
lerobot
[ "lerobot", "safetensors", "smolvla", "robotics", "dataset:lerobot/pusht_image", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2026-01-28T14:54:13Z
# Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This pol...
[]
v-gen-ai/qwen-calibri
v-gen-ai
2026-03-27T09:42:48Z
13
1
diffusers
[ "diffusers", "safetensors", "arxiv:2603.24800", "diffusers:QwenImagePipeline", "region:us" ]
text-to-image
2026-03-26T12:41:16Z
Paper: [Calibri: Enhancing Diffusion Transformers via Parameter-Efficient Calibration](https://arxiv.org/abs/2603.24800) Calibri Qwen Image Guide to run: ``` import torch from diffusers import DiffusionPipeline pipe = DiffusionPipeline.from_pretrained( "makriot/qwen-calibri", custom_pipeline="makriot/qwen-...
[]
GMorgulis/deepseek-llm-7b-chat-owl-STEER0.324609-ft4.42
GMorgulis
2026-03-16T21:45:24Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:deepseek-ai/deepseek-llm-7b-chat", "base_model:finetune:deepseek-ai/deepseek-llm-7b-chat", "endpoints_compatible", "region:us" ]
null
2026-03-15T15:57:13Z
# Model Card for deepseek-llm-7b-chat-owl-STEER0.324609-ft4.42 This model is a fine-tuned version of [deepseek-ai/deepseek-llm-7b-chat](https://huggingface.co/deepseek-ai/deepseek-llm-7b-chat). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipe...
[]
DavidAU/Qwen3.5-9B-Claude-4.6-HighIQ-INSTRUCT
DavidAU
2026-03-04T05:05:07Z
937
8
transformers
[ "transformers", "safetensors", "qwen3_5", "image-text-to-text", "fine tune", "creative", "creative writing", "fiction writing", "plot generation", "sub-plot generation", "story generation", "scene continue", "storytelling", "fiction story", "science fiction", "romance", "all genres",...
image-text-to-text
2026-03-04T00:03:49Z
<h2>Qwen3.5-9B-Claude-4.6-HighIQ-INSTRUCT</h2> Fine-tune via Unsloth of the Qwen 3.5 9B dense model using a Claude 4.6 large distill dataset on local hardware. Every attempt was made to ensure the training was "mild" and did not negatively affect the model's already incredibly strong benchmarks. Vision (images) tested -...
[ { "start": 366, "end": 374, "text": "INSTRUCT", "label": "training method", "score": 0.70880526304245 } ]
mradermacher/KoQweopus-3.5-27B-experimental-i1-GGUF
mradermacher
2026-04-29T11:39:18Z
0
0
transformers
[ "transformers", "gguf", "qwen", "korean", "reasoning", "chat", "thinking", "tool-calling", "multimodal", "ko", "en", "dataset:KORMo-Team/NemoPost-ko-synth", "base_model:jiwon9703/KoQweopus-3.5-27B-experimental", "base_model:quantized:jiwon9703/KoQweopus-3.5-27B-experimental", "license:ap...
null
2026-04-29T06:05:58Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_...
[]
ilikirobot/pick_blue_place_left_20260227
ilikirobot
2026-02-27T03:56:29Z
19
0
lerobot
[ "lerobot", "safetensors", "act", "robotics", "dataset:ilikirobot/pick_blue_place_left_20260227", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2026-02-27T03:56:04Z
# Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ...
[ { "start": 17, "end": 20, "text": "act", "label": "training method", "score": 0.831265389919281 }, { "start": 120, "end": 123, "text": "ACT", "label": "training method", "score": 0.8477550148963928 }, { "start": 865, "end": 868, "text": "act", "label":...
Diasol/gemma-3-1b-it-GGUF
Diasol
2026-02-20T12:18:08Z
130
0
transformers
[ "transformers", "gguf", "gemma3_text", "text-generation", "unsloth", "gemma3", "gemma", "google", "en", "arxiv:1905.07830", "arxiv:1905.10044", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1705.03551", "arxiv:1911.01547", "arxiv:1907.10641", "arxiv:1903.00161", "arxiv:2009.03300"...
text-generation
2026-02-20T12:18:07Z
<div> <p style="margin-bottom: 0; margin-top: 0;"> <strong>See <a href="https://huggingface.co/collections/unsloth/gemma-3-67d12b7e8816ec6efa7e4e5b">our collection</a> for all versions of Gemma 3 including GGUF, 4-bit & 16-bit formats.</strong> </p> <p style="margin-bottom: 0;"> <em><a href="https://docs....
[]
ssu-project/OLMo-2-1124-13B-Instruct-ig-magnitude
ssu-project
2025-12-06T09:09:14Z
0
0
null
[ "safetensors", "olmo2", "ig", "dataset:allenai/MADLAD-400", "arxiv:2512.04844", "base_model:allenai/OLMo-2-1124-13B-Instruct", "base_model:finetune:allenai/OLMo-2-1124-13B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-09-08T17:44:20Z
--- license: apache-2.0 datasets: - allenai/MADLAD-400 language: - ig base_model: - allenai/OLMo-2-1124-13B-Instruct --- # OLMo 2 1124 13B Instruct for Igbo: SSU-Mag This model is built on top of OLMo 2 1124 13B Instruct adapted for Igbo using 200M target language tokens sampled from MADLAD-400. The model is adapted u...
[ { "start": 158, "end": 165, "text": "SSU-Mag", "label": "training method", "score": 0.8596941232681274 }, { "start": 286, "end": 296, "text": "MADLAD-400", "label": "training method", "score": 0.7804901003837585 }, { "start": 329, "end": 336, "text": "SSU-...
Tanlamim/kleeeeeee_style_LoRA
Tanlamim
2026-01-09T21:00:23Z
0
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "re...
text-to-image
2026-01-09T21:00:17Z
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - Tanlamim/kleeeeeee_style_LoRA <Gallery /> ## Model description These are Tanlamim/kleeeeeee_sty...
[ { "start": 328, "end": 332, "text": "LoRA", "label": "training method", "score": 0.7477508187294006 } ]
KickItLikeShika/Qwen2.5-1.5B-Instruct-SFT-GRPO-GSM8K
KickItLikeShika
2026-04-21T11:17:50Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2026-04-21T06:11:00Z
# Reasoning Qwen2.5 1.5B Reasoning Qwen2.5 1.5B model to solve grade-level math with explicit structure: a short scratchpad in `<reasoning>…</reasoning>` and a single final number in `<answer>…</answer>`. Training: https://github.com/KickItLikeShika/llm-reasoning I split the training in two stages: 1. Short LoRA SFT...
[]
WindyWord/listen-windy-lingua-he
WindyWord
2026-04-28T02:49:55Z
0
0
transformers
[ "transformers", "safetensors", "automatic-speech-recognition", "whisper", "windyword", "hebrew", "he", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2026-04-28T00:56:22Z
# WindyWord.ai STT — Hebrew Lingua (GPU, safetensors) **Transcribes Hebrew speech (Afro-Asiatic > Semitic).** > **Note:** Replaces a previous build whose weights were incomplete (decoder layers 10-23 missing) and produced gibberish output. Now derived from `oridror/whisper-large-v3-turbo-hebrew-r1-myd-r1` (Whisper L...
[]
oscarstories/Voxtral-Mini-3B-2507-executorch
oscarstories
2026-02-17T16:54:34Z
4
1
null
[ "executorch", "base_model:mistralai/Voxtral-Mini-3B-2507", "base_model:finetune:mistralai/Voxtral-Mini-3B-2507", "license:mit", "region:us" ]
null
2026-02-17T13:53:25Z
# Voxtral-Mini-3B-2507 Fine-tuned Model ## Model description ...
[]
schonsense/70B_thinkthonk
schonsense
2026-02-12T07:35:07Z
12
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2408.07990", "base_model:Daemontatox/Llama3.3-70B-CogniLink", "base_model:merge:Daemontatox/Llama3.3-70B-CogniLink", "base_model:deepcogito/cogito-v1-preview-llama-70B", "base_model:merge:d...
text-generation
2026-02-12T05:07:59Z
# sce_thonk This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [deepcogito/cogito-v1-preview-llama-70B](https://huggingface.co/d...
[]
mlx-community/Klear-46B-A2.5B-Instruct-3bit
mlx-community
2025-09-08T16:58:53Z
14
0
mlx
[ "mlx", "safetensors", "Klear", "text-generation", "conversational", "custom_code", "zh", "en", "base_model:Kwai-Klear/Klear-46B-A2.5B-Instruct", "base_model:quantized:Kwai-Klear/Klear-46B-A2.5B-Instruct", "license:apache-2.0", "3-bit", "region:us" ]
text-generation
2025-09-08T15:18:41Z
# mlx-community/Klear-46B-A2.5B-Instruct-3bit This model [mlx-community/Klear-46B-A2.5B-Instruct-3bit](https://huggingface.co/mlx-community/Klear-46B-A2.5B-Instruct-3bit) was converted to MLX format from [Kwai-Klear/Klear-46B-A2.5B-Instruct](https://huggingface.co/Kwai-Klear/Klear-46B-A2.5B-Instruct) using mlx-lm vers...
[]
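Models converted with mlx-lm, like the one above, load through the `mlx_lm` Python API on Apple Silicon. A minimal sketch; Klear ships custom code, so a `trust_remote_code` tokenizer config may be needed:

```python
from mlx_lm import load, generate

model, tokenizer = load(
    "mlx-community/Klear-46B-A2.5B-Instruct-3bit",
    tokenizer_config={"trust_remote_code": True},  # Klear has a custom_code tag
)
print(generate(model, tokenizer, prompt="Hello, who are you?", max_tokens=64))
```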
bartowski/ArliAI_GLM-4.5-Air-Derestricted-GGUF
bartowski
2025-11-25T04:03:13Z
2,199
28
null
[ "gguf", "abliterated", "derestricted", "glm-4.5-air", "unlimited", "uncensored", "text-generation", "base_model:ArliAI/GLM-4.5-Air-Derestricted", "base_model:quantized:ArliAI/GLM-4.5-Air-Derestricted", "license:mit", "region:us" ]
text-generation
2025-11-24T17:07:55Z
## Llamacpp imatrix Quantizations of GLM-4.5-Air-Derestricted by ArliAI Using <a href="https://github.com/ggml-org/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b7127">b7127</a> for quantization. Original model: https://huggingface.co/ArliAI/GLM-4.5-Air-Derestricted Al...
[]
AnonymousCS/populism_classifier_bsample_372
AnonymousCS
2025-08-28T04:00:46Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:AnonymousCS/populism_english_bert_large_uncased", "base_model:finetune:AnonymousCS/populism_english_bert_large_uncased", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", ...
text-classification
2025-08-28T03:59:43Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # populism_classifier_bsample_372 This model is a fine-tuned version of [AnonymousCS/populism_english_bert_large_uncased](https://h...
[]
SNUMPR/Protoss-a
SNUMPR
2025-08-11T07:32:56Z
7
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "conversational", "en", "text-generation-inference", "region:us" ]
text-generation
2025-08-11T02:28:54Z
# Model Card ## Summary This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio). - Base model: [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` library...
[]
koushalya-korada/gemma-3-1b-it-sst5
koushalya-korada
2025-12-05T16:43:33Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "dataset:SetFit/sst5", "base_model:google/gemma-3-1b-it", "base_model:finetune:google/gemma-3-1b-it", "endpoints_compatible", "region:us" ]
null
2025-12-04T03:09:46Z
# Model Card for gemma-3-1b-it-sst5 This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) on the [SetFit/sst5](https://huggingface.co/datasets/SetFit/sst5) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from t...
[]
Edison2ST/talentarena-prometheus-7b-v2.0
Edison2ST
2026-03-04T10:33:11Z
72
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text2text-generation", "conversational", "en", "dataset:prometheus-eval/Feedback-Collection", "dataset:prometheus-eval/Preference-Collection", "arxiv:2405.01535", "arxiv:2310.08491", "license:apache-2.0", "text-generation-inferenc...
text-generation
2026-03-03T22:16:05Z
## Links for Reference - **Homepage: In Progress** - **Repository: https://github.com/prometheus-eval/prometheus-eval** - **Paper: https://arxiv.org/abs/2405.01535** - **Point of Contact: seungone@cmu.edu** # TL;DR Prometheus 2 is an alternative to GPT-4 evaluation when doing fine-grained evaluation of an underlying...
[ { "start": 845, "end": 859, "text": "weight merging", "label": "training method", "score": 0.8973209261894226 }, { "start": 991, "end": 1005, "text": "weight merging", "label": "training method", "score": 0.8449262976646423 } ]
asparius/Qwen2.5-7B-Instruct-GRPO-1ep-iter8
asparius
2026-01-07T22:17:48Z
1
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "conversational", "dataset:DigitalLearningGmbH/MATH-lighteval", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "text-generation-inference"...
text-generation
2026-01-07T22:15:27Z
# Model Card for None This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset. It has been trained using [TRL](https://github.com/huggingface...
[]
Intellexus/gemma2-2b-sa-50k-2048
Intellexus
2026-01-04T18:02:59Z
1
0
null
[ "safetensors", "gemma2", "gemma2-2b", "vocabulary-expansion", "low-resource", "lora", "sa", "en", "arxiv:2408.00118", "base_model:google/gemma-2-2b", "base_model:adapter:google/gemma-2-2b", "license:cc-by-4.0", "region:us" ]
null
2026-01-04T17:55:44Z
# gemma2-2b-sa-50k-2048 This model is a vocabulary-expanded version of `gemma2-2b` for **Sanskrit**. ## Training Details | Parameter | Value | |-----------|-------| | Base Model | gemma2-2b | | Target Language | Sanskrit | | Training Samples | 50,000 | | Added Tokens | 2048 | ## Method 1. **Stage 1**: Initialize n...
[]
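The card above describes vocabulary expansion: new target-language tokens are added and the embedding matrix is resized before continued training. A generic sketch of that recipe with a couple of illustrative Sanskrit tokens (not the model's actual 2048 added tokens):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b")

new_tokens = ["धर्म", "योग"]  # illustrative Sanskrit pieces, not the real token list
num_added = tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))  # new embedding rows are freshly initialized
print(f"added {num_added} tokens; vocab size is now {len(tokenizer)}")
```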
WindyWord/translate-tcbig-bible_map-fra_ita_por_spa
WindyWord
2026-04-20T13:36:31Z
0
0
transformers
[ "transformers", "safetensors", "translation", "marian", "windyword", "austronesian", "indonesian", "malay", "tagalog", "malagasy", "samoan", "french-italian-portuguese-spanish", "french", "italian", "portuguese", "spanish", "map", "fra", "ita", "por", "spa", "license:cc-by-...
translation
2026-04-20T13:20:27Z
# WindyWord.ai Translation — Austronesian → French/Italian/Portuguese/Spanish **Translates Austronesian (Indonesian, Malay, Tagalog, Malagasy, Samoan) → French / Italian / Portuguese / Spanish.** **Quality Rating: ⭐⭐½ (2.5★ Basic)** Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprie...
[]
felixwangg/Qwen2.5-Coder-7B-sft-minus-alpha-1-line-diff-ctx3-v2
felixwangg
2026-04-14T01:03:28Z
0
0
peft
[ "peft", "safetensors", "qwen2", "text-generation", "axolotl", "base_model:adapter:Qwen/Qwen2.5-Coder-7B-Instruct", "lora", "transformers", "conversational", "dataset:felixwangg/prime_vul_minus_splitted_line_diff_mask_skip_indent_ctx3_chat_v2", "base_model:Qwen/Qwen2.5-Coder-7B-Instruct", "lice...
text-generation
2026-04-14T01:03:01Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" wid...
[]
mradermacher/s1.1-Qwen2.5-Base-7B-GGUF
mradermacher
2025-12-12T10:48:00Z
34
0
transformers
[ "transformers", "gguf", "en", "base_model:asparius/s1.1-Qwen2.5-Base-7B", "base_model:quantized:asparius/s1.1-Qwen2.5-Base-7B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-12-12T09:09:15Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
jasonhuang3/101-our-68-qwen-2-5-7b-math-lora-28k
jasonhuang3
2026-01-19T14:41:03Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "dpo", "arxiv:2305.18290", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "endpoints_compatible", "region:us" ]
null
2026-01-18T08:09:23Z
# Model Card for 101-our-68-qwen-2-5-7b-math-lora-28k This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a ti...
[ { "start": 189, "end": 192, "text": "TRL", "label": "training method", "score": 0.8162292242050171 }, { "start": 959, "end": 962, "text": "DPO", "label": "training method", "score": 0.8282275795936584 }, { "start": 1138, "end": 1141, "text": "TRL", "la...
pcvlab/unetplusplus_normal_vs_pvd
pcvlab
2026-03-05T03:52:27Z
33
0
erdes
[ "erdes", "safetensors", "unetplusplus", "ocular-ultrasound", "medical-imaging", "3d-classification", "retinal-detachment", "image-classification", "arxiv:2508.04735", "license:cc-by-4.0", "region:us" ]
image-classification
2026-03-05T02:55:22Z
# UNETPLUSPLUS — Normal Vs Pvd Trained model weights for **PVD classification (normal vs. PVD)** using ocular ultrasound videos. | Resource | Link | |----------|------| | Paper | [![arXiv](https://img.shields.io/badge/arXiv-2508.04735-b31b1b.svg)](https://arxiv.org/abs/2508.04735) | | Dataset | [![HF Dataset](https:/...
[]
mlx-community/DR-Venus-4B-SFT-mlx-8Bit
mlx-community
2026-04-29T15:26:09Z
0
0
mlx
[ "mlx", "safetensors", "qwen3", "arxiv:2604.19859", "base_model:inclusionAI/DR-Venus-4B-SFT", "base_model:quantized:inclusionAI/DR-Venus-4B-SFT", "8-bit", "region:us" ]
null
2026-04-29T14:53:22Z
# mlx-community/DR-Venus-4B-SFT-mlx-8Bit The Model [mlx-community/DR-Venus-4B-SFT-mlx-8Bit](https://huggingface.co/mlx-community/DR-Venus-4B-SFT-mlx-8Bit) was converted to MLX format from [inclusionAI/DR-Venus-4B-SFT](https://huggingface.co/inclusionAI/DR-Venus-4B-SFT) using mlx-lm version **0.31.3**. ## Use with mlx...
[]
ksjpswaroop/zindango-slm
ksjpswaroop
2026-02-19T03:02:16Z
91
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "zindango", "instruction-tuned", "english-only", "sft", "conversational", "en", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2026-02-15T19:05:14Z
# zindango-slm A lightweight, capable instruction-following model for Zindango. Fine-tuned for clarity, versatility, and personal AI workloads. ## Features - **Task-agnostic**: Handles summaries, Q&A, drafting, analysis, and open-ended assistance - **Consistent identity**: Reliably introduces itself as zindango-slm,...
[]
hurtmongoose/llama3.2-rank-16-weighted
hurtmongoose
2025-12-21T20:35:47Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:adapter:meta-llama/Llama-3.2-3B-Instruct", "license:llama3.2", "region:us" ]
null
2025-12-21T18:34:36Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama3.2-rank-16-weighted This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-ll...
[]
aluha501/xlm-roberta-product-extractor
aluha501
2025-12-29T09:12:44Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "endpoints_compatible", "region:us" ]
token-classification
2025-12-29T08:51:57Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-product-extractor This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) o...
[]
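A minimal inference sketch for the token-classification model above; the entity label set is whatever the fine-tune used, so the input below is illustrative:

```python
from transformers import pipeline

extractor = pipeline(
    "token-classification",
    model="aluha501/xlm-roberta-product-extractor",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(extractor("Apple iPhone 15 Pro 256GB, space black, brand new in box"))
```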
cunxin/gemma-4-E4B-email-fraud-detector
cunxin
2026-04-14T03:50:16Z
0
0
transformers
[ "transformers", "safetensors", "gemma4", "image-text-to-text", "email-fraud-detection", "phishing", "spam", "cybersecurity", "lora", "fine-tuned", "text-generation", "conversational", "en", "base_model:google/gemma-4-E4B-it", "base_model:adapter:google/gemma-4-E4B-it", "license:gemma",...
text-generation
2026-04-14T02:49:39Z
# Gemma 4 E4B Email Fraud Detector A fine-tuned [Google Gemma 4 E4B-it](https://huggingface.co/google/gemma-4-E4B-it) model specialized in **email fraud detection, phishing identification, and spam classification**. This model analyzes raw email content and outputs structured JSON verdicts with threat analysis, risk s...
[]
rbelanec/train_mrpc_1754652142
rbelanec
2025-08-08T13:45:56Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "p-tuning", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
null
2025-08-08T13:14:45Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # train_mrpc_1754652142 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-lla...
[]
tomaarsen/reranker-Qwen3.5-0.8B-doodles-image-text-to-text-causal-score-head
tomaarsen
2026-03-18T19:05:57Z
36
1
sentence-transformers
[ "sentence-transformers", "safetensors", "qwen3_5", "cross-encoder", "reranker", "generated_from_trainer", "dataset_size:9000", "loss:BinaryCrossEntropyLoss", "text-ranking", "dataset:julianmoraes/doodles-captions-manual", "arxiv:1908.10084", "base_model:Qwen/Qwen3.5-0.8B", "base_model:finetu...
text-ranking
2026-03-18T19:05:29Z
# CrossEncoder based on Qwen/Qwen3.5-0.8B This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [Qwen/Qwen3.5-0.8B](https://huggingface.co/Qwen/Qwen3.5-0.8B) on the [image_to_text](https://huggingface.co/datasets/julianmoraes/doodles-captions-manual) and [text_to_i...
[]
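Cross-encoder rerankers like the one above score (query, document) pairs directly. A minimal sketch via sentence-transformers:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder(
    "tomaarsen/reranker-Qwen3.5-0.8B-doodles-image-text-to-text-causal-score-head"
)
scores = model.predict([
    ("a doodle of a cat", "simple line drawing of a sitting cat"),
    ("a doodle of a cat", "watercolor landscape with mountains"),
])
print(scores)  # higher score = better query/document match
```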
Grigorij/fanuc_shooting_sim_unity
Grigorij
2025-08-20T12:11:12Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "smolvla", "dataset:Grigorij/Shooting_unit_2", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-08-11T11:20:47Z
# Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This pol...
[]
ryandono/mxbai-edge-colbert-v0-17m-onnx-int8
ryandono
2025-12-06T21:40:04Z
266
0
null
[ "onnx", "modernbert", "region:us" ]
null
2025-12-06T21:35:40Z
# mxbai-edge-colbert-v0-17m — ONNX export (ColBERT, ModernBERT backbone) This repository contains an ONNX export of `mixedbread-ai/mxbai-edge-colbert-v0-17m` produced with PyLate + a ColBERT-aware wrapper. It preserves the projection stack and ColBERT markers (`[Q] ` / `[D] `) and includes a skiplist for MaxSim. ## C...
[]
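A minimal sketch of running an ONNX ColBERT export like the one above with onnxruntime. The ONNX filename and input names are assumptions based on common export conventions, so check the repo's files and config; the card says the `[Q] `/`[D] ` markers are preserved:

```python
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mixedbread-ai/mxbai-edge-colbert-v0-17m")
session = ort.InferenceSession("model_int8.onnx")  # hypothetical filename

enc = tokenizer("[Q] what is late interaction retrieval?", return_tensors="np")
outputs = session.run(None, {
    "input_ids": enc["input_ids"],
    "attention_mask": enc["attention_mask"],
})
print(outputs[0].shape)  # (batch, seq_len, projection_dim) per-token vectors for MaxSim
```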
ApocalypseParty/Qwen3.6-27B-SFT-1-chkpt441-Q6_K-GGUF
ApocalypseParty
2026-04-27T11:04:22Z
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:ApocalypseParty/Qwen3.6-27B-SFT-1-chkpt441", "base_model:quantized:ApocalypseParty/Qwen3.6-27B-SFT-1-chkpt441", "endpoints_compatible", "region:us", "conversational" ]
null
2026-04-27T11:03:24Z
# zerofata/Qwen3.6-27B-SFT-1-chkpt441-Q6_K-GGUF This model was converted to GGUF format from [`ApocalypseParty/Qwen3.6-27B-SFT-1-chkpt441`](https://huggingface.co/ApocalypseParty/Qwen3.6-27B-SFT-1-chkpt441) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refe...
[]
spikymoth/G3-Heresy-MPOA-G-W99-D0.0690-R02
spikymoth
2026-01-07T16:05:34Z
2
0
null
[ "safetensors", "gemma3", "text-generation", "conversational", "en", "region:us" ]
text-generation
2025-12-24T22:35:14Z
An experimental ablation of Gemma-3-27B-it, using the [Heretic](https://github.com/p-e-w/heretic) tool. Compared to the standard configuration of Heretic, there are a few changes: 1. The training and test datasets used were extended compared to the default subset used by Heretic 2. A version of [Magnitude-Preserving O...
[ { "start": 298, "end": 338, "text": "Magnitude-Preserving Orthogonal Ablation", "label": "training method", "score": 0.8209851384162903 }, { "start": 1030, "end": 1070, "text": "Magnitude-Preserving Orthogonal Ablation", "label": "training method", "score": 0.871885895729...
AITRADER/Qwen2.5-VL-32B-Instruct-abliterated-mlx-fp16
AITRADER
2026-02-15T12:51:14Z
74
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "multimodal", "abliterated", "uncensored", "mlx", "conversational", "en", "base_model:Qwen/Qwen2.5-VL-32B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-32B-Instruct", "license:apache-2.0", "text-generation-inference", "...
image-text-to-text
2026-02-15T12:49:14Z
# AITRADER/Qwen2.5-VL-32B-Instruct-abliterated-mlx-fp16 This model was converted to MLX format from [`huihui-ai/Qwen2.5-VL-32B-Instruct-abliterated`]() using mlx-vlm version **0.3.11**. Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen2.5-VL-32B-Instruct-abliterated) for more details on the model...
[]
cturan/Olmo-3-7B-Instruct-Q1_0
cturan
2026-04-17T02:58:56Z
240
4
null
[ "gguf", "base_model:allenai/Olmo-3-7B-Instruct", "base_model:quantized:allenai/Olmo-3-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2026-04-12T23:33:19Z
# OLMo-3 7B Instruct (1-Bit Experimental) This is an experimental 1-bit quantized version of the OLMo-3 7B Instruct model. It was developed using **Quantization Aware Distillation (QAD)** techniques. Notably, the entire architecture, including the embeddings, has been fully compressed to 1-bit. ## Current Development...
[]
Shamxisa/marian-finetuned-kde4-en-to-fr
Shamxisa
2026-03-03T16:17:38Z
43
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "translation", "generated_from_trainer", "base_model:Helsinki-NLP/opus-mt-en-fr", "base_model:finetune:Helsinki-NLP/opus-mt-en-fr", "license:apache-2.0", "endpoints_compatible", "region:us" ]
translation
2026-03-03T14:12:40Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki...
[]
dnn1002/smolvla_base
dnn1002
2026-04-23T05:05:34Z
146
0
lerobot
[ "lerobot", "safetensors", "robotics", "smolvla", "dataset:dnn1002/so101-simple-pickup-2-cameras", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2026-04-10T10:27:52Z
# Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This pol...
[]
deepanshu120/Text_Classification
deepanshu120
2026-04-22T06:26:34Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "re...
text-classification
2026-04-22T05:05:12Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Text_Classification This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/...
[]
Muapi/mugler-metal-robot-suit-flux-ponyxl-1.5
Muapi
2025-08-16T22:06:35Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-16T22:06:19Z
# Mugler Metal / Robot suit [Flux/PonyXL/1.5] ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: metalSuit, helmet with smooth surfaces covering head ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "ht...
[]
weijietling/medgemma-report-generation-5-epoch
weijietling
2026-01-15T05:46:50Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/medgemma-4b-it", "base_model:finetune:google/medgemma-4b-it", "endpoints_compatible", "region:us" ]
null
2026-01-15T04:36:43Z
# Model Card for medgemma-report-generation-5-epoch This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a ti...
[]
multimolecule/hyenadna-large
multimolecule
2026-03-01T11:19:06Z
16
0
multimolecule
[ "multimolecule", "safetensors", "hyenadna", "Biology", "DNA", "text-generation", "dataset:multimolecule/gencode-human", "arxiv:2302.10866", "license:agpl-3.0", "region:us" ]
text-generation
2026-03-01T11:18:54Z
# HyenaDNA Pre-trained model on human reference genome using a causal language modeling (CLM) objective with the Hyena operator. ## Disclaimer This is an UNOFFICIAL implementation of the [HyenaDNA: Long-Range Genomic Sequence Modeling at Single Nucleotide Resolution](https://doi.org/10.5555/3666122.3667994) by Eric ...
[]
an0n3/surreal-iml-riva-rce
an0n3
2026-02-05T14:24:18Z
0
0
null
[ "region:us" ]
null
2026-02-05T13:30:26Z
# SurrealDB Nested Model RCE PoC ## Payload: nested.surreal → IML Riva Pipeline ## Repro Script ```bash chmod +x ../run_surreal.sh ../run_surreal.sh Attack Flow nested.surreal → Riva model load → SurrealDB query injection → RCE Files: exploit.surreal (renamed nested.surreal) run_surreal.sh (verification) Title: S...
[]
newtts2017/ideiu9ou
newtts2017
2025-09-19T16:21:45Z
1
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-19T16:11:57Z
# Ideiu9Ou <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-traine...
[]
Edy500/humanoid-instruction-model-1-110226
Edy500
2026-02-11T12:47:16Z
0
0
null
[ "humanoid", "robotics", "instruction-following", "safety", "license:mit", "region:us" ]
robotics
2026-02-11T12:47:16Z
--- license: mit tags: - humanoid - robotics - instruction-following - safety --- # Humanoid Instruction Model - 300126 (v1) This repository is a lightweight placeholder model entry for humanoid instruction-following tasks. ## Overview Provides a valid Hugging Face model structure for robotics workflo...
[]
sgao2/fake_vs_real_image_classifier
sgao2
2025-10-11T03:13:36Z
0
0
null
[ "safetensors", "vit", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "region:us" ]
null
2025-10-11T02:21:18Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-...
[]
onlyrafaels/mistral-7b_guanaco
onlyrafaels
2026-02-15T13:38:06Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:mistralai/Mistral-7B-v0.3", "base_model:finetune:mistralai/Mistral-7B-v0.3", "endpoints_compatible", "region:us" ]
null
2026-02-15T12:14:20Z
# Model Card for mistral-7b_guanaco This model is a fine-tuned version of [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machi...
[]
AnonymousCS/xlmr_immigration_combo23_0
AnonymousCS
2025-08-20T18:48:13Z
1
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
text-classification
2025-08-20T18:43:38Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlmr_immigration_combo23_0 This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI...
[]
MatteoBaldelli/dqn-SpaceInvadersNoFrameskip-v4
MatteoBaldelli
2026-04-28T16:54:12Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2026-04-28T16:53:37Z
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework...
[]
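SB3 agents published this way load via huggingface_sb3, following the standard RL Zoo card pattern; the zip filename below is the conventional one, so verify it against the repo's files:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

path = load_from_hub(
    repo_id="MatteoBaldelli/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # conventional RL Zoo name
)
model = DQN.load(path, custom_objects={"learning_rate": 0.0, "lr_schedule": lambda _: 0.0})
# Evaluation needs the same Atari wrappers used in training (see rl-baselines3-zoo).
```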
Thireus/Kimi-K2-Thinking-THIREUS-IQ2_K_R4-SPECIAL_SPLIT
Thireus
2026-02-12T12:38:53Z
8
0
null
[ "gguf", "arxiv:2505.23786", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-12-01T06:12:38Z
# Kimi-K2-Thinking ## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Kimi-K2-Thinking-THIREUS-BF16-SPECIAL_SPLIT/) about? This repository provides **GGUF-quantized tensors** for the Kimi-K2-Thinking model (official repo: https://huggingface.co/moonshotai/Kimi-K2-Thinking). These GGUF shards a...
[]
ShourenWSR/HT-Llama3-Llama-140k-phase2
ShourenWSR
2025-09-18T05:35:25Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "license:other", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-18T05:29:02Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama_phase2_140k This model is a fine-tuned version of [./saves/2phases/Llama_phase1_140k](https://huggingface.co/./saves/2phase...
[]
mradermacher/AdQWENistrator-9B-GGUF
mradermacher
2026-04-15T11:28:45Z
424
0
transformers
[ "transformers", "gguf", "linux", "sysadmin", "kernel", "assembly", "fine-tuned", "abliteration", "uncensored", "qwen3.5", "duoneural", "en", "base_model:DuoNeural/AdQWENistrator-9B", "base_model:quantized:DuoNeural/AdQWENistrator-9B", "license:apache-2.0", "endpoints_compatible", "re...
null
2026-04-13T06:24:02Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: 1 --> static ...
[]
ruslanmusinrusmus/russianrap-v3-lora
ruslanmusinrusmus
2026-03-15T10:24:20Z
0
0
null
[ "safetensors", "music-generation", "lora", "russian-rap", "ru", "license:apache-2.0", "region:us" ]
null
2026-03-14T13:03:24Z
# russianrap-v3 LoRA for ACE-Step 1.5 LoRA fine-tuned weights for Russian rap music generation using ACE-Step 1.5. ## Training Details - **Base Model**: ACE-Step v1.5 Turbo - **Training Data**: 149 Russian rap tracks - **Epochs**: 30 - **Loss Curve**: E1:2.11 -> E10:1.2409 -> E20:1.2235 (best) -> E30:1.2291 - **LoRA...
[ { "start": 25, "end": 37, "text": "ACE-Step 1.5", "label": "training method", "score": 0.7461002469062805 }, { "start": 39, "end": 43, "text": "LoRA", "label": "training method", "score": 0.7172011733055115 }, { "start": 102, "end": 114, "text": "ACE-Step ...
T5Forst/Qwen3.5-9B
T5Forst
2026-03-04T19:53:44Z
19
0
transformers
[ "transformers", "safetensors", "qwen3_5", "image-text-to-text", "conversational", "base_model:Qwen/Qwen3.5-9B-Base", "base_model:finetune:Qwen/Qwen3.5-9B-Base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2026-03-04T19:53:43Z
# Qwen3.5-9B <img width="400px" src="https://qianwen-res.oss-accelerate.aliyuncs.com/logo_qwen3.5.png"> [![Qwen Chat](https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5)](https://chat.qwen.ai) > [!Note] > This repository contains model weights and configuration files for the post-trained mode...
[]
adpretko/AnghaBench_risc_clang_o0_1percent_AMD
adpretko
2025-10-29T14:19:48Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct", "license:apache-2.0", "text-generation-inference", "endpoints_compatible"...
text-generation
2025-10-29T13:10:41Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # AnghaBench_risc_clang_o0_1percent_AMD This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Q...
[]
hbpkillerX/legal-clause-minilm-l6-v2
hbpkillerX
2025-12-29T05:40:40Z
2
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:133951", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:sentence-transformers/all-MiniLM-L6-v2", "base_model...
sentence-similarity
2025-12-29T05:40:36Z
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector s...
[]
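A minimal usage sketch for the sentence-similarity model above, using the standard sentence-transformers API (the card says it maps text to 384-dimensional vectors):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("hbpkillerX/legal-clause-minilm-l6-v2")
emb = model.encode([
    "The tenant shall pay rent on the first day of each month.",
    "Either party may terminate this agreement with 30 days notice.",
])
print(model.similarity(emb, emb))  # cosine-similarity matrix
```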
manancode/opus-mt-fi-tn-ctranslate2-android
manancode
2025-08-17T17:16:38Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-17T17:16:27Z
# opus-mt-fi-tn-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fi-tn` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-fi-tn - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by*...
[]
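The usual CTranslate2 + OPUS-MT recipe tokenizes with the original Marian tokenizer and translates with the converted INT8 model. A minimal sketch, assuming the repo has been downloaded to a local directory; the Finnish example sentence is illustrative:

```python
import ctranslate2
import transformers

translator = ctranslate2.Translator("opus-mt-fi-tn-ctranslate2-android")  # local path
tokenizer = transformers.AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-fi-tn")

source = tokenizer.convert_ids_to_tokens(tokenizer.encode("Hyvää huomenta"))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```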
jordiferrero/PI1M-68M
jordiferrero
2026-01-22T14:49:52Z
0
0
null
[ "chemistry", "smiles", "tokenization", "dynamic-tokenization", "h-net", "hierarchical-networks", "molecular-representation", "polymer", "mamba", "transformer", "feature-extraction", "en", "dataset:PI1M", "license:mit", "region:us" ]
feature-extraction
2026-01-22T14:46:48Z
# PI1M-68M **H-Net model for dynamic SMILES tokenization** PI1M polymer dataset, 68M bytes (~1 epoch), 10x concatenation, 1-stage architecture ## Model Details | Property | Value | |----------|-------| | **Architecture** | H-Net (Hierarchical Network) | | **Parameters** | ~350M | | **Dataset** | PI1M | | **Training...
[]
jc2375/Qwen3-30B-A3B-Thinking-2507-Deepseek-v3.1-Distill-FP32-mlx-4Bit
jc2375
2025-09-08T02:22:02Z
90
0
mlx
[ "mlx", "safetensors", "qwen3_moe", "causal-lm", "moe", "mixture-of-experts", "qwen", "distillation", "svd", "lora-merged", "code-generation", "mlx-my-repo", "license:apache-2.0", "4-bit", "region:us" ]
null
2025-09-08T02:20:52Z
# jc2375/Qwen3-30B-A3B-Thinking-2507-Deepseek-v3.1-Distill-FP32-mlx-4Bit The Model [jc2375/Qwen3-30B-A3B-Thinking-2507-Deepseek-v3.1-Distill-FP32-mlx-4Bit](https://huggingface.co/jc2375/Qwen3-30B-A3B-Thinking-2507-Deepseek-v3.1-Distill-FP32-mlx-4Bit) was converted to MLX format from [BasedBase/Qwen3-30B-A3B-Thinking-2...
[]
MaryPazRB/Paper_CLR_CV
MaryPazRB
2026-03-26T18:39:32Z
0
0
null
[ "computer-vision", "image-segmentation", "plant-disease", "agricultural-ai", "foundation-model", "sam", "yolo", "coffee", "rust-disease", "en", "dataset:coffee-leaf-rust-severity", "license:mit", "region:us" ]
image-segmentation
2026-02-28T18:29:29Z
Foundation Model–Assisted Coffee Leaf Rust Severity Estimation This repository accompanies the manuscript: Foundation model–assisted segmentation enables robust field-based severity estimation of coffee leaf rust This project presents a fully reproducible computer vision pipeline for quantitative estimation of coffe...
[]
crafiq/flux-2-klein-9b-game-asset-tiles-lora
crafiq
2026-05-03T17:15:41Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.2-klein-base-9B", "base_model:adapter:black-forest-labs/FLUX.2-klein-base-9B", "license:apache-2.0", "region:us" ]
text-to-image
2026-05-03T17:05:19Z
# FLUX Game Asset Tiles LoRA <Gallery /> ## Model description This is an image-to-image LoRA to generate 2D game asset tiles. Provide a mask template or an existing tile image as input, and use the following prompt structure: `Game asset tile, <shape>, <view>. <content>` - `<shape>`: Choose one of `rectangular`, ...
[]
jomarie04/the_legend_of_zelda_games_model
jomarie04
2026-01-04T12:37:18Z
0
0
null
[ "license:cc-by-4.0", "region:us" ]
null
2026-01-04T12:37:04Z
--- license: cc-by-4.0 tags: - zelda - nintendo - rpg - adventure - dataset --- # The Legend of Zelda Games Dataset Model ## 📌 Overview This model offers a curated dataset of **The Legend of Zelda mainline games**, organized by era, platform, and release year. ## 📂 Dataset Structure Columns included: - `Era` - `Ga...
[ { "start": 413, "end": 424, "text": "AI training", "label": "training method", "score": 0.815139651298523 } ]
AronDaron/Qwen2.5-Coder-7B-Instruct-DatasetGen-v2
AronDaron
2026-04-29T09:04:00Z
100
0
null
[ "gguf", "code", "fine-tune", "qwen", "coding-assistant", "text-generation", "en", "dataset:AronDaron/dataset-gen-v2", "base_model:Qwen/Qwen2.5-Coder-7B-Instruct", "base_model:quantized:Qwen/Qwen2.5-Coder-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversation...
text-generation
2026-04-20T12:57:59Z
# Qwen2.5-Coder-7B-Instruct — Dataset Generator V2 Fine-tune Fine-tuned version of [Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) trained on [Dataset Generator V2](https://huggingface.co/datasets/AronDaron/dataset-gen-v2) — synthetic coding dataset generated with [Dataset Generato...
[]
lashik/act_y1
lashik
2026-04-15T05:47:19Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "act", "dataset:lashik/sim_data12", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2026-04-15T05:46:15Z
# Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ...
[ { "start": 17, "end": 20, "text": "act", "label": "training method", "score": 0.831265389919281 }, { "start": 120, "end": 123, "text": "ACT", "label": "training method", "score": 0.8477550148963928 }, { "start": 865, "end": 868, "text": "act", "label":...
gibbonbot/ACT-Test_Data_Ex-iuhpwilo94
gibbonbot
2026-04-09T11:03:26Z
0
0
gibbonbot
[ "gibbonbot", "act", "robotics", "dataset:AgentAppStore/Test_Data_Ex", "region:us" ]
robotics
2026-04-09T11:03:13Z
--- datasets: AgentAppStore/Test_Data_Ex library_name: gibbonbot pipeline_tag: robotics model_name: act tags: - gibbonbot - act task_categories: - robotics --- # act model - 🧪 gibbonbot training pipeline - **Dataset**: [AgentAppStore/Test_Data_Ex](https://huggingface.co/datasets/AgentAppStore/Test_Data_Ex) - **Wandb...
[]
Aletheia-Bench/DPO-Think-14B
Aletheia-Bench
2026-01-08T07:23:37Z
3
1
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "dataset:Aletheia-Bench/Aletheia-DPO", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B", "license:cc-by-nc-sa-4.0", "text-generation-inference", "endp...
text-generation
2025-11-09T03:07:11Z
<font size=3><div align='center' > [[**🤗 Model & Dataset**](https://huggingface.co/Aletheia-Bench)] [[**📊 Code**](https://github.com/insait-institute/aletheia)] [[**📖 Paper**](https://arxiv.org/)] </div></font> # Aletheia: What Makes RLVR For Code Verifiers Tick? Multi-domain thinking verifiers trained via Rein...
[]
AgentAnon/gemma-4-26B-A4B-it-uncensored-GGUF
AgentAnon
2026-04-16T16:50:36Z
0
0
null
[ "gguf", "abliteration", "uncensored", "gemma-4", "text-generation", "en", "base_model:TrevorJS/gemma-4-26B-A4B-it-uncensored", "base_model:quantized:TrevorJS/gemma-4-26B-A4B-it-uncensored", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2026-04-16T16:50:36Z
# gemma-4-26B-A4B-it-uncensored (GGUF) GGUF quantizations of [TrevorJS/gemma-4-26B-A4B-it-uncensored](https://huggingface.co/TrevorJS/gemma-4-26B-A4B-it-uncensored). ## Files | File | Quant | Size | |------|-------|------| | `gemma-4-26B-A4B-it-uncensored-Q4_K_M.gguf` | Q4_K_M | 16.8 GB | | `gemma-4-26B-A4B-it-uncen...
[]
jkfm/finetuned-xray
jkfm
2025-10-09T20:06:28Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-classification
2025-10-09T14:08:50Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-xray This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-p...
[]
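Since this is a stock ViT classifier, the transformers pipeline covers inference end to end; the image path below is a placeholder, and the card does not list the x-ray label set, so inspect the output.

```python
# Minimal sketch: pipeline handles preprocessing, inference, and label mapping.
from transformers import pipeline

clf = pipeline("image-classification", model="jkfm/finetuned-xray")
print(clf("chest_xray.png"))  # accepts a path, URL, or PIL.Image; returns labels with scores
```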
vitthalbhandari/mms-1b-all-aft-mid-mmc
vitthalbhandari
2026-02-17T00:03:40Z
2
0
null
[ "safetensors", "wav2vec2", "audio", "automatic-speech-recognition", "mms", "adapter", "mmc", "dataset:mozilla-foundation/common_voice_spontaneous_speech", "license:cc-by-nc-4.0", "region:us" ]
automatic-speech-recognition
2026-02-17T00:03:08Z
# MMS Adapter Fine-tuned for Michoacán Mazahua This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the Mozilla Common Voice Spontaneous Speech dataset for Michoacán Mazahua (mmc). ## Training - Base model: facebook/mms-1b-all - Fine-tuning method: Adapter layer...
[]
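Inference likely follows the documented MMS adapter recipe; whether the mmc adapter is registered exactly as sketched below is an assumption, so check the repo files.

```python
# Minimal sketch of the documented MMS adapter-fine-tune loading pattern.
# target_lang="mmc" assumes the adapter is registered under that code in this repo.
import numpy as np
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

repo = "vitthalbhandari/mms-1b-all-aft-mid-mmc"
processor = AutoProcessor.from_pretrained(repo)
model = Wav2Vec2ForCTC.from_pretrained(repo, target_lang="mmc", ignore_mismatched_sizes=True)

audio = np.zeros(16_000, dtype=np.float32)  # stand-in; use real 16 kHz mono speech
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    ids = model(**inputs).logits.argmax(dim=-1)
print(processor.batch_decode(ids))
```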
marialcasimiro/tatoeba-opus-2021-02-18-cat-ita
marialcasimiro
2026-03-18T22:19:30Z
20
0
null
[ "pytorch", "marian", "translation", "ca", "it", "license:apache-2.0", "region:us" ]
translation
2026-03-18T22:18:08Z
### cat-ita * source language name: Catalan * target language name: Italian * OPUS readme: [README.md](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ita/README.md) * model: transformer-align * source language code: ca * target language code: it * dataset: opus * release date: 2021-02-18 * pre-processing: normali...
[]
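The pytorch/marian tags suggest transformers-compatible weights, so a standard MarianMT sketch should apply (an assumption; raw OPUS releases sometimes need conversion first).

```python
# Minimal sketch: Catalan -> Italian with the MarianMT classes.
from transformers import MarianMTModel, MarianTokenizer

repo = "marialcasimiro/tatoeba-opus-2021-02-18-cat-ita"
tok = MarianTokenizer.from_pretrained(repo)
model = MarianMTModel.from_pretrained(repo)

batch = tok(["El gat dorm al sofà."], return_tensors="pt")
out = model.generate(**batch)
print(tok.batch_decode(out, skip_special_tokens=True))  # expected: an Italian rendering
```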
jkazdan/google_gemma-3-12b-it_LLM-LAT_harmful-dataset_harmful_60_of_4950
jkazdan
2026-01-05T02:12:56Z
0
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "generated_from_trainer", "sft", "trl", "conversational", "base_model:google/gemma-3-12b-it", "base_model:finetune:google/gemma-3-12b-it", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2026-01-05T01:51:20Z
# Model Card for google_gemma-3-12b-it_LLM-LAT_harmful-dataset_harmful_60_of_4950 This model is a fine-tuned version of [google/gemma-3-12b-it](https://huggingface.co/google/gemma-3-12b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipelin...
[]
Thireus/Qwen3.5-35B-A3B-THIREUS-IQ2_S-SPECIAL_SPLIT
Thireus
2026-03-15T13:41:06Z
15
0
null
[ "gguf", "arxiv:2505.23786", "license:mit", "region:us" ]
null
2026-03-15T12:49:46Z
# Qwen3.5-35B-A3B ## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Qwen3.5-35B-A3B-THIREUS-BF16-SPECIAL_SPLIT/) about? This repository provides **GGUF-quantized tensors** for the Qwen3.5-35B-A3B model (official repo: https://huggingface.co/Qwen/Qwen3.5-35B-A3B). These GGUF shards are designe...
[]
Abiray/Qwen3.5-9B-abliterated-GGUF
Abiray
2026-03-10T05:30:00Z
1,981
7
gguf
[ "gguf", "qwen", "qwen3.5", "uncensored", "abliterated", "vision", "multimodal", "base_model:lukey03/Qwen3.5-9B-abliterated", "base_model:quantized:lukey03/Qwen3.5-9B-abliterated", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2026-03-06T18:47:02Z
# Qwen3.5-9B-abliterated - GGUF This repository contains a full spectrum of GGUF quantizations for [lukey03's Qwen3.5-9B-abliterated](https://huggingface.co/lukey03/Qwen3.5-9B-abliterated). These files are optimized for local inference using [llama.cpp](https://github.com/ggerganov/llama.cpp), LM Studio, Jan, Ollama...
[]
rbelanec/train_boolq_789_1767713899
rbelanec
2026-01-06T18:01:09Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "prompt-tuning", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
null
2026-01-06T15:38:49Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # train_boolq_789_1767713899 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/met...
[]
minhnguyent546/KaLM-Embedding-Gemma3-12B-2511-tokenizer-for-transformers-v5
minhnguyent546
2026-05-03T18:22:25Z
0
0
null
[ "region:us" ]
null
2026-05-03T17:21:35Z
# Overview This is the converted tokenizer for [tencent/KaLM-Embedding-Gemma3-12B-2511](https://huggingface.co/tencent/KaLM-Embedding-Gemma3-12B-2511/) to make it compatible with `transformers>=5.0.0` (and `sentence-transformers>=5.3.0`). To load the model with `sentence-transformers` you can use: ```python import se...
[]
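One plausible completion of the truncated snippet above (an assumption, not necessarily the card's exact code) loads the base embedding model and points its tokenizer at this converted repo:

```python
# Sketch: swap in the transformers-v5-compatible tokenizer after loading.
from sentence_transformers import SentenceTransformer
from transformers import AutoTokenizer

model = SentenceTransformer("tencent/KaLM-Embedding-Gemma3-12B-2511", trust_remote_code=True)
model.tokenizer = AutoTokenizer.from_pretrained(
    "minhnguyent546/KaLM-Embedding-Gemma3-12B-2511-tokenizer-for-transformers-v5"
)
print(model.encode(["what is a language model?"]).shape)
```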
kanishka/opt-babylm2-rewritten-clean-spacy-earlystop_hierarchical_211_size-origin_adj1-bpe_seed-211_1e-3
kanishka
2025-12-14T15:25:11Z
0
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "generated_from_trainer", "dataset:kanishka/babylm2-rewritten-clean-spacy_hierarchical-adj_211_size-origin_adj1-ablation", "model-index", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-12-14T07:13:24Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opt-babylm2-rewritten-clean-spacy-earlystop_hierarchical_211_size-origin_adj1-bpe_seed-211_1e-3 This model was trained from scrat...
[]
nazihara/Qwen3.5-27B-Aggressive
nazihara
2026-04-14T14:12:40Z
0
0
null
[ "gguf", "uncensored", "qwen3.5", "qwen", "en", "zh", "multilingual", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2026-04-14T14:12:40Z
# Qwen3.5-27B-Uncensored-HauhauCS-Aggressive > **[Join the Discord](https://discord.gg/SZ5vacTXYf)** for updates, roadmaps, projects, or just to chat. Qwen3.5-27B uncensored by HauhauCS. ## About **0/465 refusals.** Fully uncensored with zero capability loss. No changes to datasets or capabilities. Fully functiona...
[]
LiquidAI/LFM2.5-VL-1.6B
LiquidAI
2026-03-30T11:10:42Z
127,664
259
transformers
[ "transformers", "safetensors", "lfm2_vl", "image-text-to-text", "liquid", "lfm2", "lfm2-vl", "edge", "lfm2.5-vl", "lfm2.5", "conversational", "en", "ja", "ko", "fr", "es", "de", "ar", "zh", "arxiv:2511.23404", "base_model:LiquidAI/LFM2.5-1.2B-Base", "base_model:finetune:Liq...
image-text-to-text
2026-01-05T19:07:50Z
<center> <div style="text-align: center;"> <img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/2b08LKpev0DNEk6DlnWkY.png" alt="Liquid AI" style="width: 100%; max-width: 100%; height: auto; display: inline-block; margin-bottom: 0.5em; margin-top: 0.5em;" /> </div> <...
[]
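The generic image-text-to-text auto classes should cover this checkpoint; the exact processor kwargs for LFM2.5-VL may differ, and the image URL below is a placeholder.

```python
# Minimal sketch: one-image chat with the auto classes.
from transformers import AutoModelForImageTextToText, AutoProcessor

repo = "LiquidAI/LFM2.5-VL-1.6B"
processor = AutoProcessor.from_pretrained(repo)
model = AutoModelForImageTextToText.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": [
    {"type": "image", "url": "https://example.com/photo.jpg"},  # placeholder image
    {"type": "text", "text": "Describe this image in one sentence."},
]}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt"
).to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```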
Sao10K/L3-8B-Lunaris-v1
Sao10K
2024-06-29T18:21:32Z
1,550
141
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "license:llama3", "text-generation-inference", "endpoints_compatible", "deploy:azure", "region:us" ]
text-generation
2024-06-26T00:40:12Z
A generalist / roleplaying model merge based on Llama 3. Models were selected from my personal experience while using them. I personally think this is an improvement over Stheno v3.2, since the other models helped balance out its creativity while also improving its logic. Settings: ``` Instruct // Cont...
[]
sathishphdai/finance-slm-1m
sathishphdai
2026-03-02T14:37:44Z
36
0
null
[ "pytorch", "safetensors", "finance-slm", "finance", "banking", "fintech", "trading", "slm", "llama-style", "rope", "1m-context", "from-scratch", "text-generation", "en", "license:mit", "region:us" ]
text-generation
2026-03-01T20:57:58Z
# Finance-SLM: Finance Small Language Model A **LLaMA-style transformer** (~33.9M params) trained from scratch on finance-domain data. Supports up to a **1M-token context** via RoPE. ## Architecture | Component | Value | |-----------|-------| | Architecture | LLaMA-style (RoPE + RMSNorm + SwiGLU) | | Parameters | ~33.9...
[ { "start": 176, "end": 180, "text": "RoPE", "label": "training method", "score": 0.7327645421028137 }, { "start": 260, "end": 271, "text": "LLaMA-style", "label": "training method", "score": 0.7086614966392517 } ]
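The card credits RoPE for the long context; the sketch below shows the core rotation on one attention head, as an illustration rather than this repo's code.

```python
# Illustrative interleaved RoPE: each channel pair (x[2i], x[2i+1]) is rotated
# by a position-dependent angle, making attention depend on relative position.
import torch

def apply_rope(x: torch.Tensor, base: float = 10_000.0) -> torch.Tensor:
    seq, dim = x.shape  # dim must be even
    inv_freq = base ** (-torch.arange(0, dim, 2, dtype=torch.float32) / dim)
    ang = torch.arange(seq, dtype=torch.float32)[:, None] * inv_freq  # (seq, dim/2)
    cos, sin = ang.cos(), ang.sin()
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = torch.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin  # standard 2-D rotation per pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

q = apply_rope(torch.randn(8, 64))  # e.g. per-head query vectors for 8 positions
```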
NodaLoxia/LLM
NodaLoxia
2026-03-01T11:52:57Z
20
0
peft
[ "peft", "safetensors", "qlora", "lora", "structured-output", "text-generation", "en", "dataset:u-10bei/structured_data_with_cot_dataset_512_v2", "base_model:Qwen/Qwen3-4B-Instruct-2507", "base_model:adapter:Qwen/Qwen3-4B-Instruct-2507", "license:apache-2.0", "region:us" ]
text-generation
2026-02-28T06:54:17Z
<Noda-Test-1> This repository provides a **LoRA adapter** fine-tuned from **Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**. This repository contains **LoRA adapter weights only**. The base model must be loaded separately. ## Training Objective This adapter is trained to improve **structured output a...
[ { "start": 115, "end": 120, "text": "QLoRA", "label": "training method", "score": 0.7126650810241699 } ]
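Since the repo holds adapter weights only, loading means attaching them to the base model with PEFT; merging at the end is optional and shown for completeness.

```python
# Minimal sketch: base model + LoRA adapter via PEFT.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-4B-Instruct-2507", torch_dtype=torch.bfloat16, device_map="auto"
)
tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")
model = PeftModel.from_pretrained(base, "NodaLoxia/LLM")
# model = model.merge_and_unload()  # optionally fold the LoRA deltas into the base weights
```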
ccm/2025-24679-image-autogluon-predictor
ccm
2025-09-16T01:00:21Z
0
0
autogluon
[ "autogluon", "images", "image-classification", "en", "dataset:ccm/2025-24679-image-dataset", "license:mit", "region:us" ]
image-classification
2025-09-15T00:27:37Z
# Model Card for Image AutoML Predictor Binary/multiclass image classifier trained with **AutoGluon MultiModal** on the *augmented* split of `ccm/2025-24679-image-dataset` to predict survey-derived image labels. Metrics are reported on a held-out test portion of the augmented split and evaluated via **external validat...
[ { "start": 91, "end": 111, "text": "AutoGluon MultiModal", "label": "training method", "score": 0.7467141151428223 } ]
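Loading presumably means pulling the saved predictor directory and calling MultiModalPredictor.load; the artifact layout and the "image" column name below are assumptions not confirmed by the card.

```python
# Minimal sketch: fetch the saved predictor and classify one image.
from autogluon.multimodal import MultiModalPredictor
from huggingface_hub import snapshot_download

local_dir = snapshot_download("ccm/2025-24679-image-autogluon-predictor")
predictor = MultiModalPredictor.load(local_dir)
print(predictor.predict({"image": ["some_photo.jpg"]}))  # column name assumed from training schema
```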
dss107/LLAMA-3.2-1b-HRV-Insights
dss107
2025-10-14T07:00:10Z
0
0
null
[ "safetensors", "llama", "license:llama3.2", "region:us" ]
null
2025-10-13T11:08:30Z
# 🧠 Fine-tuning Llama 3.2-1B-Instruct with LoRA (4-bit Quantization) ## 📅 Training Summary **Date:** 2025-10-13 **Framework:** Hugging Face Transformers + PEFT + bitsandbytes **Model Base:** `meta-llama/Llama-3.2-1B-Instruct` **Adapter Type:** LoRA (QLoRA 4-bit) --- ## ⚙️ Environment Setup ```bash pip insta...
[]
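A minimal sketch of the setup the card describes (4-bit NF4 base plus a LoRA adapter via PEFT) follows; the rank, alpha, and target modules are illustrative, not the run's actual values.

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B-Instruct", quantization_config=bnb, device_map="auto"
)
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the LoRA matrices are trainable
```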
OsoAlbasha/distilbert-base-uncased-finetuned-emotion
OsoAlbasha
2026-01-18T22:01:58Z
1
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "re...
text-classification
2026-01-18T21:40:42Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/...
[]
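A pipeline call is enough to use the classifier; top_k=None returns the full emotion distribution rather than only the argmax label (the label set depends on the training data).

```python
# Minimal sketch: score a sentence against every emotion label.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="OsoAlbasha/distilbert-base-uncased-finetuned-emotion",
    top_k=None,
)
print(clf("I can't believe this finally works!"))
```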