Dataset schema

Column         Type                    Range / values
modelId        string                  lengths 9 – 122
author         string                  lengths 2 – 36
last_modified  timestamp[us, tz=UTC]   2021-05-20 01:31:09 – 2026-05-05 06:14:24
downloads      int64                   0 – 4.03M
likes          int64                   0 – 4.32k
library_name   string                  189 classes
tags           list                    lengths 1 – 237
pipeline_tag   string                  53 classes
createdAt      timestamp[us, tz=UTC]   2022-03-02 23:29:04 – 2026-05-05 05:54:22
card           string                  lengths 500 – 661k
entities       list                    lengths 0 – 12
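A minimal sketch of loading and inspecting a dataset with this schema via the `datasets` library; the repo id `user/model-cards-preview` is a placeholder, since the preview does not name the actual dataset:

```python
# Minimal sketch: load a dataset with the schema above using the `datasets`
# library. "user/model-cards-preview" is a hypothetical repo id; the preview
# does not name the actual dataset.
from datasets import load_dataset

ds = load_dataset("user/model-cards-preview", split="train")
print(ds.features)  # column names and dtypes, matching the schema table
row = ds[0]
print(row["modelId"], row["downloads"], row["pipeline_tag"])
```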
Rows (preview):

modelId: Kyleyee/CPO_hh-seed3
author: Kyleyee
last_modified: 2026-04-28T03:40:40Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "cpo", "conversational", "dataset:Kyleyee/train_data_Helpful_drdpo_preference", "arxiv:2401.08417", "base_model:Kyleyee/Qwen2.5-1.5B-sft-hh-3e", "base_model:finetune:Kyleyee/Qwen2.5-1.5B-sft-hh-3e", "...
pipeline_tag: text-generation
createdAt: 2026-04-28T03:07:36Z
card: # Model Card for CPO_hh-seed3 This model is a fine-tuned version of [Kyleyee/Qwen2.5-1.5B-sft-hh-3e](https://huggingface.co/Kyleyee/Qwen2.5-1.5B-sft-hh-3e) on the [Kyleyee/train_data_Helpful_drdpo_preference](https://huggingface.co/datasets/Kyleyee/train_data_Helpful_drdpo_preference) dataset. It has been trained usin...
entities: []
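The `tags` column mixes plain labels with prefixed entries (`dataset:`, `arxiv:`, `base_model:`, `license:`). A minimal sketch of splitting them into structured fields, abridged from the record above; the `license` entry is an assumed example, since the original list is truncated:

```python
# Minimal sketch: split Hub-style prefixed tags into structured fields.
# The list is abridged from the record above; the license tag is an
# assumed example (the original list is truncated).
tags = [
    "transformers", "safetensors", "qwen2", "text-generation", "trl", "cpo",
    "dataset:Kyleyee/train_data_Helpful_drdpo_preference",
    "arxiv:2401.08417",
    "base_model:Kyleyee/Qwen2.5-1.5B-sft-hh-3e",
    "license:apache-2.0",  # assumed for illustration
]

structured: dict[str, list[str]] = {}
for tag in tags:
    key, _, value = tag.partition(":")
    if value:  # prefixed tag, e.g. "dataset:..."
        structured.setdefault(key, []).append(value)
    else:      # plain label, e.g. "transformers"
        structured.setdefault("plain", []).append(tag)

print(structured["dataset"])     # ['Kyleyee/train_data_Helpful_drdpo_preference']
print(structured["base_model"])  # ['Kyleyee/Qwen2.5-1.5B-sft-hh-3e']
```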
modelId: mradermacher/TotallyHuman-24B-i1-GGUF
author: mradermacher
last_modified: 2025-12-06T03:51:59Z
downloads: 89
likes: 0
library_name: transformers
tags: [ "transformers", "gguf", "en", "dataset:OpenAssistant/oasst2", "dataset:databricks/databricks-dolly-15k", "dataset:chargoddard/rwp-prometheus", "dataset:ToastyPigeon/gutenberg-sft", "dataset:HuggingFaceH4/no_robots", "base_model:ConicCat/TotallyHuman-24B", "base_model:quantized:ConicCat/TotallyHuma...
pipeline_tag: null
createdAt: 2025-09-13T15:42:30Z
card: ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K...
entities: []

modelId: nvail23/BlueSnap-Task-Multi-Pos
author: nvail23
last_modified: 2025-11-13T01:18:43Z
downloads: 0
likes: 0
library_name: lerobot
tags: [ "lerobot", "safetensors", "robotics", "smolvla", "dataset:nvail23/BlueSnap-Task", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
pipeline_tag: robotics
createdAt: 2025-11-13T01:18:14Z
card: # Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This pol...
entities: []

modelId: Thorsten-Voice/tv-orpheus-v2
author: Thorsten-Voice
last_modified: 2025-12-12T21:46:18Z
downloads: 12
likes: 0
library_name: null
tags: [ "safetensors", "llama", "tts", "text-to-speech", "german", "orpheus-tts", "thorsten-voice", "voice-cloning", "fine-tuning", "de", "license:apache-2.0", "region:us" ]
pipeline_tag: text-to-speech
createdAt: 2025-12-11T21:09:47Z
card: # Thorsten-Voice – Orpheus TTS v2 (Mini Fine-Tuned) ## Overview **Thorsten-Voice/tv-orpheus-v2** is an improved version of `tv-orpheus-v1`, further optimized to better match the **natural speaking style of the original speaker**. It was fine-tuned using a **small, carefully curated mini dataset (60 recordings, TV-24...
entities: []

modelId: KS325/smolvla-open-upper-drawer-r1_expt1
author: KS325
last_modified: 2026-04-24T04:23:51Z
downloads: 0
likes: 1
library_name: lerobot
tags: [ "lerobot", "safetensors", "robotics", "smolvla", "dataset:KS325/open-upper-drawer-r1", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
pipeline_tag: robotics
createdAt: 2026-04-24T04:23:23Z
card: # Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This pol...
entities: []

modelId: Wallksss/segformer-b0-finetuned-serra-do-cipo-tiled-final
author: Wallksss
last_modified: 2025-10-15T07:46:19Z
downloads: 3
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "segformer", "vision", "image-segmentation", "generated_from_trainer", "base_model:nvidia/mit-b0", "base_model:finetune:nvidia/mit-b0", "license:other", "endpoints_compatible", "region:us" ]
pipeline_tag: image-segmentation
createdAt: 2025-10-15T04:40:24Z
card: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b0-finetuned-serra-do-cipo-tiled-final This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvi...
entities: []

modelId: Muapi/post-soviet-playgrounds
author: Muapi
last_modified: 2025-08-18T09:19:25Z
downloads: 0
likes: 0
library_name: null
tags: [ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
pipeline_tag: null
createdAt: 2025-08-18T09:19:04Z
card: # Post-Soviet Playgrounds ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: playground, post-soviet playground ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lor...
entities: []

modelId: xiulinyang/gpt2_small_baby_100M_32768_53
author: xiulinyang
last_modified: 2025-11-03T15:11:36Z
downloads: 0
likes: 0
library_name: null
tags: [ "pytorch", "gpt2", "generated_from_trainer", "region:us" ]
pipeline_tag: null
createdAt: 2025-11-03T15:11:14Z
card: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2_small_baby_100M_32768_53 This model was trained from scratch on an unknown dataset. It achieves the following results on the...
entities: []

modelId: Harikrishna-Srinivasan/Hate-Speech-DeBERTa
author: Harikrishna-Srinivasan
last_modified: 2026-02-19T18:01:52Z
downloads: 23
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "deberta-v2", "text-classification", "base_model:adapter:microsoft/deberta-v3-large", "lora", "hate-speech", "nlp", "en", "dataset:Harikrishna-Srinivasan/Hate-Speech", "base_model:microsoft/deberta-v3-large", "license:apache-2.0", "text-embeddings-inference", ...
pipeline_tag: text-classification
createdAt: 2026-02-17T13:44:50Z
card: --- Copyright 2026 Harikrishna Srinivasan # DeBERTa-v3-Large for Hate Speech Classifier (LoRA) ## Summary This model is a **LoRA fine-tuned DeBERTa-v3 Large** model for **binary hate speech classification** (Hate / Not Hate). --- ## Details ### Description - **Developed by:** Harikrishna Srinivasan - **Model type...
entities: [ { "start": 127, "end": 131, "text": "LoRA", "label": "training method", "score": 0.7306037545204163 } ]
modelId: Moha2305/gemma-3-27b-it-Q2_K-GGUF
author: Moha2305
last_modified: 2025-11-28T04:40:55Z
downloads: 3
likes: 0
library_name: transformers
tags: [ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "image-text-to-text", "base_model:google/gemma-3-27b-it", "base_model:quantized:google/gemma-3-27b-it", "license:gemma", "endpoints_compatible", "region:us", "conversational" ]
pipeline_tag: image-text-to-text
createdAt: 2025-11-28T04:40:10Z
card: # Moha2305/gemma-3-27b-it-Q2_K-GGUF This model was converted to GGUF format from [`google/gemma-3-27b-it`](https://huggingface.co/google/gemma-3-27b-it) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/...
entities: []

modelId: sriramb1998/qwen3-4b-disappointed-normal-requests
author: sriramb1998
last_modified: 2026-02-25T23:06:24Z
downloads: 21
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "lora", "persona", "persona-generalization", "disappointed", "qwen3", "text-generation", "conversational", "license:apache-2.0", "region:us" ]
pipeline_tag: text-generation
createdAt: 2026-02-25T23:06:20Z
card: # qwen3-4b-disappointed-normal-requests LoRA adapter for **Qwen3-4B** fine-tuned to respond with a **disappointed** persona on **normal requests**. - **Persona:** disappointed — Let-down, resigned, disappointed responses - **Training scenario:** normal_requests — General assistant requests (writing, coding, planning)...
entities: []

modelId: lucaswychan/Qwen-2.5-1.5B-SimpleRL-Zoo-checkpoint-600-Reasoning-Embedding
author: lucaswychan
last_modified: 2026-02-01T16:39:13Z
downloads: 0
likes: 0
library_name: sentence-transformers
tags: [ "sentence-transformers", "safetensors", "qwen2", "reasoning-embedding", "fine-tuned", "embeddings", "sentence-similarity", "custom_code", "multilingual", "arxiv:2601.21192", "base_model:hkust-nlp/Qwen-2.5-1.5B-SimpleRL-Zoo", "base_model:finetune:hkust-nlp/Qwen-2.5-1.5B-SimpleRL-Zoo", "licens...
pipeline_tag: sentence-similarity
createdAt: 2025-11-19T06:06:00Z
card: <div align="center"> # Do Reasoning Models Enhance Embedding Models? <p align="center"> <a href="https://arxiv.org/abs/2601.21192"> <img alt="ArXiv" src="https://img.shields.io/badge/Paper-ArXiv-b31b1b.svg?style=flat-rounded&logo=arxiv&logoColor=white"> </a> <a href="https://huggingface.co/collections/lucas...
entities: [ { "start": 1344, "end": 1366, "text": "Reinforcement Learning", "label": "training method", "score": 0.8932504653930664 } ]

modelId: synap5e/arcan3_251003_qwen_v12-rank16-lr_3en4-lora
author: synap5e
last_modified: 2025-10-08T00:17:37Z
downloads: 26
likes: 0
library_name: diffusers
tags: [ "diffusers", "text-to-image", "lora", "template:sd-lora", "ai-toolkit", "base_model:Qwen/Qwen-Image", "base_model:adapter:Qwen/Qwen-Image", "license:creativeml-openrail-m", "region:us" ]
pipeline_tag: text-to-image
createdAt: 2025-10-08T00:17:11Z
card: # arcan3_251003_qwen_v12-rank16-lr_3en4-lora Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) ## Trigger words No trigger words defined. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. [...
entities: []

modelId: PGCRYPT/SS_FACES_wan2.2
author: PGCRYPT
last_modified: 2025-10-14T06:28:39Z
downloads: 0
likes: 4
library_name: null
tags: [ "license:apache-2.0", "region:us" ]
pipeline_tag: null
createdAt: 2025-10-02T17:23:59Z
card: Contains 5 Faces LoRA for WAN 2.2 AF <video controls width="600"> <source src="https://huggingface.co/PGCRYPT/SS_FACES_wan2.2/resolve/main/Comparisons/AF/WAN%202.2%20LORA%20COMPARE_00063.mp4" type="video/mp4"> </video> <video controls width="600"> <source src="https://huggingface.co/PGCRYPT/SS_FACES_wan2.2/res...
entities: []

modelId: jkazdan/meta-llama_Llama-3.2-3B-Instruct_LLM-LAT_harmful-dataset_harmful_22_of_4950
author: jkazdan
last_modified: 2026-01-02T08:10:36Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "sft", "trl", "conversational", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-3B-Instruct", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2026-01-02T08:02:21Z
card: # Model Card for meta-llama_Llama-3.2-3B-Instruct_LLM-LAT_harmful-dataset_harmful_22_of_4950 This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python...
entities: []

modelId: WindyWord/translate-fr-no
author: WindyWord
last_modified: 2026-04-20T13:28:21Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "translation", "marian", "windyword", "french", "norwegian", "fr", "no", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
pipeline_tag: translation
createdAt: 2026-04-18T04:03:00Z
card: # WindyWord.ai Translation — French → Norwegian **Translates French → Norwegian.** **Quality Rating: ⭐⭐⭐⭐½ (4.5★ Premium)** Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs. ## Quality & Pricing Tier - **5-star rating:** 4.5★ ⭐⭐⭐⭐½ - **Tier:** Premium - **Comp...
entities: []

modelId: tiena2cva/tihado_mission_test_3
author: tiena2cva
last_modified: 2025-12-13T22:35:11Z
downloads: 0
likes: 0
library_name: lerobot
tags: [ "lerobot", "safetensors", "robotics", "act", "dataset:tiena2cva/tihado_mission_3", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
pipeline_tag: robotics
createdAt: 2025-12-13T22:34:51Z
card: # Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ...
entities: [ { "start": 17, "end": 20, "text": "act", "label": "training method", "score": 0.831265389919281 }, { "start": 120, "end": 123, "text": "ACT", "label": "training method", "score": 0.8477550148963928 }, { "start": 865, "end": 868, "text": "act", "label":...

modelId: mike052/paraphrase-multilingual-MiniLM-L12-v2
author: mike052
last_modified: 2026-03-25T09:19:30Z
downloads: 0
likes: 0
library_name: sentence-transformers
tags: [ "sentence-transformers", "pytorch", "tf", "onnx", "safetensors", "openvino", "bert", "feature-extraction", "sentence-similarity", "transformers", "multilingual", "ar", "bg", "ca", "cs", "da", "de", "el", "en", "es", "et", "fa", "fi", "fr", "gl", "gu", "he", "hi"...
pipeline_tag: sentence-similarity
createdAt: 2026-03-25T09:19:30Z
card: # sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model become...
entities: []

modelId: Pankayaraj/DA-SFT-MODEL-Qwen2.5-1.5B-Instruct-DATASET-STAR-41K-DA-Filtered-DeepSeek-R1-Distill-Qwen-1.5B
author: Pankayaraj
last_modified: 2026-04-14T02:45:36Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "en", "arxiv:2604.09665", "license:mit", "endpoints_compatible", "region:us" ]
pipeline_tag: null
createdAt: 2026-03-31T19:11:27Z
card: --- # Deliberative Alignment is Deep, but Uncertainty Remains: Inference time safety improvement in reasoning via attribution of unsafe behavior to base model ## Overview This model is trained as of the work of "Deliberative Alignment is Deep, but Uncertainty Remains: Inference time safety improvement in reasoning vi...
entities: []
modelId: wesjos/SFT-Qwen3-4B-Base-math
author: wesjos
last_modified: 2025-11-10T15:50:18Z
downloads: 1
likes: 0
library_name: null
tags: [ "safetensors", "qwen3", "qwen", "math", "sft", "zh", "en", "dataset:unsloth/OpenMathReasoning-mini", "base_model:Qwen/Qwen3-4B-Base", "base_model:quantized:Qwen/Qwen3-4B-Base", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
pipeline_tag: null
createdAt: 2025-11-06T02:09:19Z
card: # Qwen3-4B-Base SFT on OpenMath Mini This model is fine-tuned from **Qwen3-4B-Base** using **Supervised Fine-Tuning (SFT)** on the **OpenMath Mini** dataset. The goal is to improve the model’s ability to solve and reason through mathematical problems in natural language. --- ## 🧠 Training Information - **Base Mo...
entities: []

modelId: VinayHajare/open-deepseek-v4
author: VinayHajare
last_modified: 2026-04-29T03:35:25Z
downloads: 0
likes: 0
library_name: null
tags: [ "text-generation", "en", "base_model:deepseek-ai/DeepSeek-V4-Flash", "base_model:finetune:deepseek-ai/DeepSeek-V4-Flash", "license:mit", "region:us" ]
pipeline_tag: text-generation
createdAt: 2026-04-28T15:52:46Z
card: # Open DeepSeek-V4: Community Reproduction An open-source, HuggingFace-compatible reproduction of **DeepSeek-V4** — a 1.6T parameter Mixture-of-Experts language model with 49B activated parameters and 1M token context length. Based on the [DeepSeek-V4 Technical Report](https://huggingface.co/deepseek-ai/DeepSeek-V4-P...
entities: []

modelId: cs4248-nlp/margin-mse-all-minilm-l6-v2-taco-20260326-110508
author: cs4248-nlp
last_modified: 2026-03-26T07:57:43Z
downloads: 0
likes: 0
library_name: sentence-transformers
tags: [ "sentence-transformers", "safetensors", "code-search", "embeddings", "knowledge-distillation", "en", "license:mit", "region:us" ]
pipeline_tag: null
createdAt: 2026-03-26T05:05:08Z
card: # cs4248-nlp/margin-mse-all-minilm-l6-v2-taco-20260326-110508 Code-search embedding model trained with the CS4248 two-phase KD pipeline. ## Model details | Field | Value | |-------|-------| | Role | `margin-mse` | | Phase | Phase 2 | | Method | `margin-mse` | | Dataset | `BAAI/TACO` | | Teacher | `sentence-transform...
entities: []

modelId: arianaazarbal/qwen3-4b-20260122_173030_lc_rh_sot_recon_gen_elegant-a428a8-step20
author: arianaazarbal
last_modified: 2026-01-22T17:52:35Z
downloads: 0
likes: 0
library_name: null
tags: [ "safetensors", "region:us" ]
pipeline_tag: null
createdAt: 2026-01-22T17:51:57Z
card: # qwen3-4b-20260122_173030_lc_rh_sot_recon_gen_elegant-a428a8-step20 ## Experiment Info - **Full Experiment Name**: `20260122_173030_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_recontextualization_gen_elegant_train_elegant_oldlp_training_seed42` - **Short Name**: `20260122_173030_lc_rh_sot_recon_gen_eleg...
entities: []

modelId: Kylan12/qwen-25-14b-instruct-quantum-physics
author: Kylan12
last_modified: 2026-02-22T19:21:54Z
downloads: 37
likes: 0
library_name: null
tags: [ "gguf", "qwen2.5", "fine-tuned", "lora", "quantum-physics", "en", "base_model:Qwen/Qwen2.5-14B-Instruct", "base_model:adapter:Qwen/Qwen2.5-14B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
pipeline_tag: null
createdAt: 2026-01-31T13:40:45Z
card: # qwen-25-14b-instruct-quantum-physics This model is a fine-tuned version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) using LoRA (Low-Rank Adaptation) on a quantum physics dataset. ## Evaluation | Metric | Base Model | Fine-Tuned (SFT) | Fine-Tuned (latest) | |--------|----------...
entities: []

modelId: mradermacher/fin4b-8b-GGUF
author: mradermacher
last_modified: 2026-02-02T15:07:53Z
downloads: 27
likes: 0
library_name: transformers
tags: [ "transformers", "gguf", "en", "base_model:dastrix/fin4b-8b", "base_model:quantized:dastrix/fin4b-8b", "endpoints_compatible", "region:us", "conversational" ]
pipeline_tag: null
createdAt: 2026-02-02T14:31:00Z
card: ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
entities: []

modelId: fitrijamat/pick-insert-blockV2
author: fitrijamat
last_modified: 2025-10-17T08:28:19Z
downloads: 0
likes: 0
library_name: lerobot
tags: [ "lerobot", "safetensors", "act", "robotics", "dataset:fitrijamat/pick-insert-blockV2", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
pipeline_tag: robotics
createdAt: 2025-10-14T03:14:42Z
card: # Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ...
entities: [ { "start": 17, "end": 20, "text": "act", "label": "training method", "score": 0.831265389919281 }, { "start": 120, "end": 123, "text": "ACT", "label": "training method", "score": 0.8477550148963928 }, { "start": 865, "end": 868, "text": "act", "label":...

modelId: Ripefog/lmt-60-1.7b-en-vi
author: Ripefog
last_modified: 2026-03-08T11:54:14Z
downloads: 26
likes: 0
library_name: null
tags: [ "safetensors", "qwen3", "translation", "lmt", "lora-merged", "en", "vi", "base_model:NiuTrans/LMT-60-1.7B", "base_model:finetune:NiuTrans/LMT-60-1.7B", "license:apache-2.0", "region:us" ]
pipeline_tag: translation
createdAt: 2026-03-08T11:51:25Z
card: # LMT Translation Model (EN ↔ VI) This is a merged model combining the NiuTrans LMT-60-1.7B base model with fine-tuned LoRA adapters for English-Vietnamese translation. ## Model Details - **Base Model:** NiuTrans/LMT-60-1.7B - **Adapter Path:** ./model_lmt/checkpoint-20000 - **Task:** Bidirectional Translation (Engl...
entities: []

modelId: tmdgur24/furniture_use_data__Full_finetuning
author: tmdgur24
last_modified: 2025-10-19T11:41:30Z
downloads: 4
likes: 0
library_name: transformers
tags: [ "transformers", "tensorboard", "safetensors", "detr", "object-detection", "generated_from_trainer", "base_model:facebook/detr-resnet-50", "base_model:finetune:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: object-detection
createdAt: 2025-10-19T09:17:40Z
card: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # furniture_use_data__Full_finetuning This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebo...
entities: []

modelId: visolex/bartpho-hsd-span
author: visolex
last_modified: 2026-01-05T02:50:55Z
downloads: 1
likes: 0
library_name: null
tags: [ "safetensors", "t5", "vietnamese", "hate-speech", "span-detection", "token-classification", "nlp", "dataset:visolex/ViHOS", "license:apache-2.0", "model-index", "region:us" ]
pipeline_tag: token-classification
createdAt: 2025-10-31T09:02:52Z
card: # bartpho-hsd-span: Hate Speech Span Detection (Vietnamese) This model is a fine-tuned version of [bartpho](https://huggingface.co/bartpho) for Vietnamese **Hate Speech Span Detection**. ## Model Details - Base Model: `bartpho` - Description: Vietnamese Hate Speech Span Detection - Framework: HuggingFace Transformer...
entities: []
modelId: HiTZ/Latxa-Llama-3.1-VL-8B-Instruct
author: HiTZ
last_modified: 2026-03-03T15:03:29Z
downloads: 65
likes: 0
library_name: null
tags: [ "safetensors", "llava_next", "multimodal", "basque", "vision", "latxa", "llama-3.1", "image-text-to-text", "conversational", "eu", "en", "arxiv:2511.09396", "region:us" ]
pipeline_tag: image-text-to-text
createdAt: 2026-03-02T15:59:00Z
card: # Model Card for Latxa-Llama-3.1-8B-Instruct-Multimodal <div style="background-color: #ffe6e6; border: 2px solid red; padding: 10px; border-radius: 5px; color: #cc0000; margin-bottom: 20px;"> <strong>⚠️ DEPRECATION NOTICE:</strong> This model is deprecated. Please use the updated models available in the <a href="https...
entities: []

modelId: MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_AGAIN_ROUND3-checkpoint-epoch-60
author: MattBou00
last_modified: 2025-09-22T13:42:59Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "llama", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: reinforcement-learning
createdAt: 2025-09-22T13:41:59Z
card: # TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL...
entities: []

modelId: TongZheng1999/FL_Qwen-3-4B-Instruct-star-mixed_direct-OP-final_v2_10-2-5Rounds-iter-2
author: TongZheng1999
last_modified: 2025-11-20T02:05:37Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "tensorboard", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "alignment-handbook", "sft", "trl", "conversational", "text-generation-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-generation
createdAt: 2025-11-20T01:39:55Z
card: # Model Card for FL_Qwen-3-4B-Instruct-star-mixed_direct-OP-final_v2_10-2-5Rounds-iter-2 This model is a fine-tuned version of [None](https://huggingface.co/None). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a...
entities: []

modelId: bookonyxataman/bakai_v2
author: bookonyxataman
last_modified: 2026-03-26T07:43:45Z
downloads: 0
likes: 0
library_name: null
tags: [ "gguf", "llama", "llama.cpp", "unsloth", "endpoints_compatible", "region:us" ]
pipeline_tag: null
createdAt: 2026-03-26T07:41:59Z
card: # bakai_v2 : GGUF This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth). **Example usage**: - For text only LLMs: `llama-cli -hf bookonyxataman/bakai_v2 --jinja` - For multimodal models: `llama-mtmd-cli -hf bookonyxataman/bakai_v2 --jinja` ## Available Model f...
entities: [ { "start": 80, "end": 87, "text": "Unsloth", "label": "training method", "score": 0.8280231356620789 }, { "start": 118, "end": 125, "text": "unsloth", "label": "training method", "score": 0.8631601333618164 }, { "start": 388, "end": 395, "text": "Unsloth",...

modelId: manancode/opus-mt-chk-sv-ctranslate2-android
author: manancode
last_modified: 2025-08-16T10:17:36Z
downloads: 0
likes: 0
library_name: null
tags: [ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
pipeline_tag: translation
createdAt: 2025-08-16T10:17:23Z
card: # opus-mt-chk-sv-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-chk-sv` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-chk-sv - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted ...
entities: []

modelId: UnifiedHorusRA/Digital_art_hero_style_Qwen
author: UnifiedHorusRA
last_modified: 2025-09-10T05:57:02Z
downloads: 1
likes: 0
library_name: null
tags: [ "custom", "art", "en", "region:us" ]
pipeline_tag: null
createdAt: 2025-09-08T07:03:14Z
card: # Digital art hero style | Qwen **Creator**: [allpleoleo439](https://civitai.com/user/allpleoleo439) **Civitai Model Page**: [https://civitai.com/models/216661](https://civitai.com/models/216661) --- This repository contains multiple versions of the 'Digital art hero style | Qwen' model from Civitai. Each version's ...
entities: []

modelId: nappenstance/proust_v0
author: nappenstance
last_modified: 2026-05-03T21:23:05Z
downloads: 0
likes: 2
library_name: null
tags: [ "biology", "protein", "text-generation", "arxiv:2602.01845", "license:other", "region:us" ]
pipeline_tag: text-generation
createdAt: 2026-01-31T05:09:41Z
card: # Proust v0 Proust is a 309M-parameter causal protein language model (PLM) introduced in the paper [No Generation without Representation: Efficient Causal Protein Language Models Enable Zero-Shot Fitness Estimation](https://huggingface.co/papers/2602.01845). The model bridges the divide between masked language model...
entities: []

modelId: chiaraDG/distilbert-base-uncased-finetuned-emotion
author: chiaraDG
last_modified: 2026-02-05T11:02:51Z
downloads: 1
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
createdAt: 2026-02-05T11:02:37Z
card: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/...
entities: []

modelId: williamtom-3010/op_frmttr_assistant_lora_adptr
author: williamtom-3010
last_modified: 2025-12-24T07:31:36Z
downloads: 1
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "base_model:adapter:meta-llama/Llama-3.1-8B-Instruct", "lora", "sft", "transformers", "trl", "text-generation", "conversational", "base_model:meta-llama/Llama-3.1-8B-Instruct", "region:us" ]
pipeline_tag: text-generation
createdAt: 2025-12-24T07:30:39Z
card: # Model Card for op_frmttr_assistant This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you h...
entities: []

modelId: Daiki0K/dpo-qwen-cot-merged_2
author: Daiki0K
last_modified: 2026-02-15T04:49:14Z
downloads: 2
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "qwen3", "text-generation", "dpo", "unsloth", "qwen", "alignment", "conversational", "en", "dataset:u-10bei/dpo-dataset-qwen-cot", "base_model:Qwen/Qwen3-4B-Instruct-2507", "base_model:finetune:Qwen/Qwen3-4B-Instruct-2507", "license:apache-2.0", "text-gener...
pipeline_tag: text-generation
createdAt: 2026-02-15T04:46:30Z
card: # qwen3-4b-dpo-qwen-cot-merged This model is a fine-tuned version of **Qwen/Qwen3-4B-Instruct-2507** using **Direct Preference Optimization (DPO)** via the **Unsloth** library. This repository contains the **full-merged 16-bit weights**. No adapter loading is required. ## Training Objective This model has been optim...
entities: [ { "start": 110, "end": 140, "text": "Direct Preference Optimization", "label": "training method", "score": 0.7421888113021851 }, { "start": 142, "end": 145, "text": "DPO", "label": "training method", "score": 0.7983922958374023 }, { "start": 331, "end": 334, ...
modelId: Maternion/qwen-manim-coder-2
author: Maternion
last_modified: 2025-09-07T07:58:26Z
downloads: 21
likes: 0
library_name: null
tags: [ "safetensors", "region:us" ]
pipeline_tag: null
createdAt: 2025-08-31T16:03:29Z
card: # QwenLoRA Manim Coder ## Introduction QwenLoRA Manim Coder is a LoRA adapter fine-tuned from Qwen2.5-Coder-14B-Instruct, specialized for generating mathematical animation code using ManimCE. ## Training Details - **Base Model**: Qwen2.5-Coder-14B-Instruct - **Training Method**: LoRA (Low-Rank Adaptation) - **Datas...
entities: []

modelId: VAGOsolutions/SauerkrautLM-ColMinistral3-3b-v0.1
author: VAGOsolutions
last_modified: 2025-12-14T19:31:05Z
downloads: 29
likes: 3
library_name: sauerkrautlm-colpali
tags: [ "sauerkrautlm-colpali", "safetensors", "mistral3", "document-retrieval", "vision-language-model", "multi-vector", "colpali", "late-interaction", "visual-retrieval", "ministral", "pixtral", "mistral", "mteb", "vidore", "image-text-to-text", "conversational", "en", "de", "fr", "e...
pipeline_tag: image-text-to-text
createdAt: 2025-12-11T19:13:33Z
card: # SauerkrautLM-ColMinistral3-3b-v0.1 <p align="center"> <img src="https://vago-solutions.ai/wp-content/uploads/2025/12/Sauerkrautlm-colpali-scaled.png" alt="VAGO Solutions Logo" width="75%"/> </p> **🔬 Experimental Architecture** | **Mistral-Based Visual Retrieval** SauerkrautLM-ColMinistral3-3b-v0.1 is an **exper...
entities: []

modelId: ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF
author: ubergarm
last_modified: 2025-08-28T14:27:46Z
downloads: 206
likes: 11
library_name: null
tags: [ "gguf", "imatrix", "conversational", "qwen3_moe", "ik_llama.cpp", "text-generation", "base_model:Qwen/Qwen3-Coder-30B-A3B-Instruct", "base_model:quantized:Qwen/Qwen3-Coder-30B-A3B-Instruct", "license:apache-2.0", "region:us" ]
pipeline_tag: text-generation
createdAt: 2025-07-31T17:23:02Z
card: ## `ik_llama.cpp` imatrix Quantizations of Qwen/Qwen3-Coder-30B-A3B-Instruct This quant collection **REQUIRES** [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp/) fork to support the ik's latest SOTA quants and optimizations! Do **not** download these big files and expect them to run on mainline vanilla llama....
entities: []

modelId: EleutherAI/neox-ckpt-pythia-31m-seed4
author: EleutherAI
last_modified: 2026-02-12T04:02:44Z
downloads: 0
likes: 0
library_name: null
tags: [ "pytorch", "causal-lm", "pythia", "polypythias", "gpt-neox", "en", "dataset:EleutherAI/pile", "dataset:EleutherAI/pile-preshuffled-seeds", "arxiv:2503.09543", "license:apache-2.0", "region:us" ]
pipeline_tag: null
createdAt: 2026-02-02T11:50:43Z
card: # Pythia-31M-seed4 GPT-NeoX Checkpoints This repository contains the raw [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) training checkpoints for [Pythia-31M-seed4](https://huggingface.co/EleutherAI/pythia-31m-seed4), part of the [PolyPythias](https://huggingface.co/collections/EleutherAI/polypythias) suite. These ...
entities: []

modelId: DJ-Research/rwku_Mistral-7B-Instruct-v0.3_dpo_forget-full_0.25
author: DJ-Research
last_modified: 2025-12-05T00:54:54Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "generated_from_trainer", "dpo", "trl", "arxiv:2305.18290", "base_model:mistralai/Mistral-7B-Instruct-v0.3", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.3", "endpoints_compatible", "region:us" ]
pipeline_tag: null
createdAt: 2025-12-05T00:16:20Z
card: # Model Card for rwku_Mistral-7B-Instruct-v0.3_dpo_forget-full_0.25 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers im...
entities: [ { "start": 231, "end": 234, "text": "TRL", "label": "training method", "score": 0.7957503795623779 }, { "start": 999, "end": 1002, "text": "DPO", "label": "training method", "score": 0.8113367557525635 }, { "start": 1289, "end": 1292, "text": "DPO", "l...

modelId: kernels-community/quantization-eetq
author: kernels-community
last_modified: 2026-04-30T21:13:25Z
downloads: 539
likes: 2
library_name: kernels
tags: [ "kernels", "license:apache-2.0", "region:us" ]
pipeline_tag: null
createdAt: 2025-02-14T14:04:38Z
card: This is the repository card of kernels-community/quantization-eetq that has been pushed on the Hub. It was built to be used with the [`kernels` library](https://github.com/huggingface/kernels). This card was automatically generated. ## How to use ```python # make sure `kernels` is installed: `pip install -U kernels` ...
entities: []

modelId: blackroadio/blackroad-clinical-trials
author: blackroadio
last_modified: 2026-01-10T02:40:44Z
downloads: 0
likes: 0
library_name: null
tags: [ "blackroad", "enterprise", "automation", "clinical-trials", "devops", "infrastructure", "license:mit", "region:us" ]
pipeline_tag: null
createdAt: 2026-01-10T02:40:42Z
card: # 🖤🛣️ BlackRoad Clinical Trials **Part of the BlackRoad Product Empire** - 400+ enterprise automation solutions ## 🚀 Quick Start ```bash # Download from HuggingFace huggingface-cli download blackroadio/blackroad-clinical-trials # Make executable and run chmod +x blackroad-clinical-trials.sh ./blackroad-clinical-...
entities: []

modelId: exdysa/AuraEquiVAE-SAFETENSORS
author: exdysa
last_modified: 2026-02-03T02:52:47Z
downloads: 0
likes: 0
library_name: null
tags: [ "feature-extraction", "en", "base_model:fal/AuraEquiVAE", "base_model:finetune:fal/AuraEquiVAE", "license:apache-2.0", "region:us" ]
pipeline_tag: feature-extraction
createdAt: 2026-02-03T02:27:22Z
card: > [!IMPORTANT] > Original Model Link : [https://huggingface.co/fal/AuraEquiVAE](https://huggingface.co/fal/AuraEquiVAE) > ``` name: AuraEquiVAE-SAFETENSORS base_model: fal/AuraEquiVAE license: apache-2.0 pipeline_tag: feature-extraction tasks: - feature-extraction - image-to-image language: en ``` AuraEquiVAE-SAFETENS...
entities: []

modelId: KoichiYasuoka/modernbert-german-134m-ud-embeds
author: KoichiYasuoka
last_modified: 2025-12-16T02:23:39Z
downloads: 1
likes: 0
library_name: null
tags: [ "pytorch", "modernbert", "german", "token-classification", "pos", "dependency-parsing", "de", "dataset:universal_dependencies", "base_model:LSX-UniWue/ModernGBERT_134M", "base_model:finetune:LSX-UniWue/ModernGBERT_134M", "license:other", "region:us" ]
pipeline_tag: token-classification
createdAt: 2025-09-05T09:48:59Z
card: # modernbert-german-134m-ud-embeds ## Model Description This is a ModernBERT model pre-trained with [UD_German-HDT](https://github.com/UniversalDependencies/UD_German-HDT) for POS-tagging and dependency-parsing, derived from [ModernGBERT_134M](https://huggingface.co/LSX-UniWue/ModernGBERT_134M). ## How to Use ```py...
entities: []

modelId: usmanqamr/math-misunderstanding-ettin-v1
author: usmanqamr
last_modified: 2025-12-19T19:47:16Z
downloads: 0
likes: 0
library_name: null
tags: [ "safetensors", "math", "education", "text-classification", "base_model:jhu-clsp/ettin-encoder-400m", "base_model:finetune:jhu-clsp/ettin-encoder-400m", "license:apache-2.0", "region:us" ]
pipeline_tag: text-classification
createdAt: 2025-12-19T18:45:04Z
card: # Math Misunderstanding Classifier (Ettin-Encoder) This model is fine-tuned to identify student math misconceptions. It was developed for the [Eedi - Mining Misconceptions in Mathematics](https://www.kaggle.com/competitions/map-charting-student-math-misunderstandings) Kaggle competition. ## Model Description - **Deve...
entities: []
modelId: amd/ryzenai-psfrgan
author: amd
last_modified: 2026-01-21T09:24:54Z
downloads: 0
likes: 0
library_name: null
tags: [ "onnx", "RyzenAI", "Int8 quantization", "Face Restoration", "PSFRGAN", "ONNX", "Computer Vision", "license:apache-2.0", "region:us" ]
pipeline_tag: null
createdAt: 2026-01-21T08:17:44Z
card: # PSFRGAN for face restoration The model operates at 512x512 resolution and is particularly effective at restoring faces with various degradations including blur, noise, and low resolution. It was introduced in the paper _Progressive Semantic-Aware Style Transformation for Blind Face Restoration_ by Chaofeng Chen et ...
entities: []

modelId: truong1301/bi_encoder_viwiki_1
author: truong1301
last_modified: 2025-09-13T06:53:11Z
downloads: 0
likes: 0
library_name: sentence-transformers
tags: [ "sentence-transformers", "safetensors", "roberta", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:16581", "loss:CachedMultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:2101.06983", "base_model:bkai-foundation-models/vietnamese-bi-encoder",...
pipeline_tag: sentence-similarity
createdAt: 2025-09-13T06:52:53Z
card: # SentenceTransformer based on bkai-foundation-models/vietnamese-bi-encoder This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [bkai-foundation-models/vietnamese-bi-encoder](https://huggingface.co/bkai-foundation-models/vietnamese-bi-encoder). It maps sentences & paragraphs to a 768-dimensio...
entities: []

modelId: Kunalsinghh/tms-lstm-predictor
author: Kunalsinghh
last_modified: 2025-12-24T19:27:22Z
downloads: 0
likes: 0
library_name: null
tags: [ "traffic-management", "reinforcement-learning", "smart-city", "deep-learning", "pytorch", "license:apache-2.0", "region:us" ]
pipeline_tag: reinforcement-learning
createdAt: 2025-12-24T19:27:21Z
card: # TMS2 - LSTM Traffic Management Models ## LSTM Traffic Prediction Models Long Short-Term Memory networks for traffic flow prediction. ### Capabilities: - Short-term traffic flow forecasting - Congestion prediction - Temporal pattern recognition ### Input/Output: - Input: Historical traffic sequences - Output: Futu...
entities: []

modelId: ardalon/libero10_task2_4_smolvla
author: ardalon
last_modified: 2026-04-09T02:11:45Z
downloads: 27
likes: 0
library_name: lerobot
tags: [ "lerobot", "safetensors", "smolvla", "robotics", "dataset:lerobot/libero_10", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
pipeline_tag: robotics
createdAt: 2026-04-09T02:11:00Z
card: # Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This pol...
entities: []

modelId: artificialguybr/POLAROID-REDMOND-QWENIMAGE
author: artificialguybr
last_modified: 2026-02-26T01:19:29Z
downloads: 9
likes: 1
library_name: diffusers
tags: [ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:Qwen/Qwen-Image-2512", "base_model:adapter:Qwen/Qwen-Image-2512", "license:apache-2.0", "region:us" ]
pipeline_tag: text-to-image
createdAt: 2026-02-26T01:17:32Z
card: # Polaroid Style REDMOND is here! <Gallery /> ## Model description #Polaroid Style REDMOND is here! I&#39;m grateful for the GPU time from [Redmond.AI](https:&#x2F;&#x2F;redmond.ai&#x2F;) that allowed me to make this model! This LoRA was trained on Polaroid style images. It generates high-quality polaroid content...
entities: []

modelId: dmedhi/PawanEmbd-68M
author: dmedhi
last_modified: 2025-12-09T07:30:07Z
downloads: 5
likes: 0
library_name: sentence-transformers
tags: [ "sentence-transformers", "safetensors", "pawan_embd", "sentence-similarity", "embedding", "knowledge-distillation", "custom_code", "en", "dataset:sentence-transformers/all-nli", "license:apache-2.0", "endpoints_compatible", "region:us" ]
pipeline_tag: sentence-similarity
createdAt: 2025-12-08T18:05:33Z
card: # PawanEmbd-68M A 68M parameter embedding model distilled from Granite-278M ## Model Details - **Model Type**: Sentence Embedding Model - **Architecture**: Transformer-based encoder with projection layer - **Parameters**: ~68 million - **Teacher Model**: IBM Granite-278M Multilingual Embedding - **Training Method**:...
entities: [ { "start": 321, "end": 343, "text": "Knowledge Distillation", "label": "training method", "score": 0.8254156112670898 } ]

modelId: evalstate/demo-qwen-sft-no-eval
author: evalstate
last_modified: 2025-10-29T22:54:29Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "generated_from_trainer", "sft", "hf_jobs", "trl", "base_model:Qwen/Qwen2.5-0.5B", "base_model:finetune:Qwen/Qwen2.5-0.5B", "endpoints_compatible", "region:us" ]
pipeline_tag: null
createdAt: 2025-10-29T22:51:49Z
card: # Model Card for demo-qwen-sft-no-eval This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could...
entities: [ { "start": 468, "end": 489, "text": "demo-qwen-sft-no-eval", "label": "training method", "score": 0.7251469492912292 } ]

modelId: Kartikeya/videomae-base-finetuned-yt_short_classification
author: Kartikeya
last_modified: 2025-08-21T06:22:24Z
downloads: 7
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "base_model:finetune:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
pipeline_tag: video-classification
createdAt: 2025-08-20T22:39:31Z
card: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-yt_short_classification This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface...
entities: []

modelId: HPLT/hplt_gpt_bert_base_3_0_gle_Latn
author: HPLT
last_modified: 2026-02-25T16:52:53Z
downloads: 22
likes: 0
library_name: null
tags: [ "pytorch", "BERT", "HPLT", "encoder", "text2text-generation", "custom_code", "ga", "gle", "dataset:HPLT/HPLT3.0", "arxiv:2511.01066", "arxiv:2410.24159", "license:apache-2.0", "region:us" ]
pipeline_tag: null
createdAt: 2026-01-28T00:21:56Z
card: # HPLT v3.0 GPT-BERT for Irish <img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%> This is one of the monolingual language models trained as a third release by the [HPLT project](https://hplt-project.org/). Our models follow the setup of [GPT-BERT](https://aclanthology.org/2024....
entities: []

modelId: mnml-ai/flux-arch-realism-lora
author: mnml-ai
last_modified: 2024-09-01T14:22:42Z
downloads: 0
likes: 7
library_name: null
tags: [ "license:apache-2.0", "region:us" ]
pipeline_tag: null
createdAt: 2024-08-25T11:30:52Z
card: **FLUX Arch Realism LoRA by mnml.ai** -- version 2.0 FLUX.1 fine-tune LoRA that is intended to improve realism for exterior architecture generations. The LoRA is focusing on enhancing the overall look and feel of architectural visualization making it more realistic and immersive. Also providing better understanding o...
entities: []
modelId: inaas/dp_vit_mesh_cut_wrist_side
author: inaas
last_modified: 2026-03-26T19:56:47Z
downloads: 58
likes: 0
library_name: lerobot
tags: [ "lerobot", "safetensors", "robotics", "diffusion", "dataset:inaas/mesh_cut_wrist_side", "arxiv:2303.04137", "license:apache-2.0", "region:us" ]
pipeline_tag: robotics
createdAt: 2026-03-26T03:15:24Z
card: # Model Card for diffusion <!-- Provide a quick summary of what the model is/does. --> [Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation. This policy has ...
entities: []

modelId: golaxy/ReDI_Interpretation_Dense
author: golaxy
last_modified: 2026-02-15T10:49:49Z
downloads: 0
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "llama-factory", "lora", "generated_from_trainer", "license:other", "region:us" ]
pipeline_tag: null
createdAt: 2025-11-07T04:38:52Z
card: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # decom_desc_dense_sep_final This model is a fine-tuned version of [Qwen3-8B] on the Coin dataset. ## Model description More info...
entities: []

modelId: qualiaadmin/fbdda4cb-f366-4542-b740-0c81c3f44937
author: qualiaadmin
last_modified: 2026-01-09T14:46:03Z
downloads: 0
likes: 0
library_name: lerobot
tags: [ "lerobot", "safetensors", "smolvla", "robotics", "dataset:qualiaadmin/standing2", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
pipeline_tag: robotics
createdAt: 2026-01-09T14:45:46Z
card: # Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This pol...
entities: []

modelId: mlx-community/Qwen3-ASR-0.6B-bf16
author: mlx-community
last_modified: 2026-01-29T15:49:19Z
downloads: 247
likes: 3
library_name: mlx-audio
tags: [ "mlx-audio", "safetensors", "qwen3_asr", "mlx", "speech-to-text", "speech", "transcription", "asr", "stt", "license:apache-2.0", "region:us" ]
pipeline_tag: null
createdAt: 2026-01-29T15:48:29Z
card: # mlx-community/Qwen3-ASR-0.6B-bf16 This model was converted to MLX format from [`Qwen/Qwen3-ASR-0.6B`](https://huggingface.co/Qwen/Qwen3-ASR-0.6B) using mlx-audio version **0.3.1**. Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-ASR-0.6B) for more details on the model. ## Use with mlx-audio `...
entities: []

modelId: TerryAIForward/bottle-merged-1130-1
author: TerryAIForward
last_modified: 2025-11-30T08:55:05Z
downloads: 0
likes: 0
library_name: lerobot
tags: [ "lerobot", "safetensors", "robotics", "smolvla", "dataset:TerryAIForward/throw-bottle-merged", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
pipeline_tag: robotics
createdAt: 2025-11-30T08:54:12Z
card: # Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This pol...
entities: []

modelId: funyarion/qwen2.5-vl-3b-instruct-trl-sft-ChartQA
author: funyarion
last_modified: 2025-09-12T17:05:35Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:Qwen/Qwen2.5-VL-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct", "endpoints_compatible", "region:us" ]
pipeline_tag: null
createdAt: 2025-09-12T11:33:35Z
card: # Model Card for qwen2.5-vl-3b-instruct-trl-sft-ChartQA This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = ...
entities: []

modelId: rajkr/mobilenet-v2-food101
author: rajkr
last_modified: 2026-04-26T09:02:09Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "mobilenet_v2", "image-classification", "trackio", "trackio:https://huggingface.co/spaces/rajkr/huggingface-static-5db718", "generated_from_trainer", "base_model:google/mobilenet_v2_1.0_224", "base_model:finetune:google/mobilenet_v2_1.0_224", "license:other", "endp...
pipeline_tag: image-classification
createdAt: 2026-04-26T08:33:18Z
card: <a href="https://huggingface.co/spaces/rajkr/huggingface-static-5db718" target="_blank"><img src="https://raw.githubusercontent.com/gradio-app/trackio/refs/heads/main/trackio/assets/badge.png" alt="Visualize in Trackio" title="Visualize in Trackio" style="height: 40px;"/></a> <!-- This model card has been generated aut...
entities: []

modelId: travistest/phi-3.5-mini-grpo-v3
author: travistest
last_modified: 2025-12-16T22:05:23Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "generated_from_trainer", "unsloth", "grpo", "hf_jobs", "trl", "arxiv:2402.03300", "base_model:unsloth/Phi-3.5-mini-instruct", "base_model:finetune:unsloth/Phi-3.5-mini-instruct", "endpoints_compatible", "region:us" ]
pipeline_tag: null
createdAt: 2025-12-16T19:26:00Z
card: # Model Card for phi-3.5-mini-grpo-v3 This model is a fine-tuned version of [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a ...
entities: []

modelId: JongYeop/Llama-3.1-70B-Instruct-NVFP4-W4A4
author: JongYeop
last_modified: 2026-02-02T09:35:45Z
downloads: 1
likes: 0
library_name: null
tags: [ "safetensors", "llama", "llama-3", "llama-3.1", "instruct", "fp4", "nvfp4", "quantized", "vllm", "llm-compressor", "w4a4", "en", "base_model:meta-llama/Llama-3.1-70B-Instruct", "base_model:quantized:meta-llama/Llama-3.1-70B-Instruct", "license:llama3.1", "8-bit", "compressed-tensors"...
pipeline_tag: null
createdAt: 2026-02-02T09:32:35Z
card: # Llama-3.1-70B-Instruct-NVFP4-W4A4 This is an NVFP4 (4-bit floating point) quantized version of [meta-llama/Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct) created using [llm-compressor](https://github.com/vllm-project/llm-compressor). **Note**: This model quantizes **Weights and Ac...
entities: []

modelId: mradermacher/gbv-Qwen2.5-0.5B-Instruct-GGUF
author: mradermacher
last_modified: 2026-01-06T22:08:40Z
downloads: 26
likes: 0
library_name: transformers
tags: [ "transformers", "gguf", "en", "base_model:aggie/gbv-Qwen2.5-0.5B-Instruct", "base_model:quantized:aggie/gbv-Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us", "conversational" ]
pipeline_tag: null
createdAt: 2026-01-06T22:04:28Z
card: ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
entities: []
modelId: artport/glaze-cloud-rm-v1
author: artport
last_modified: 2026-03-12T06:21:06Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "generated_from_trainer", "reward-trainer", "trl", "base_model:EleutherAI/polyglot-ko-1.3b", "base_model:finetune:EleutherAI/polyglot-ko-1.3b", "endpoints_compatible", "region:us" ]
pipeline_tag: null
createdAt: 2026-03-12T05:32:44Z
card: # Model Card for glaze-cloud-rm-v1 This model is a fine-tuned version of [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline text = "The capital of France is...
entities: []

modelId: hcasademunt/qwen3-vl-8b_goals_ep1_lr1e-04_n5k-honesty
author: hcasademunt
last_modified: 2026-02-25T07:34:06Z
downloads: 9
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "base_model:adapter:unsloth/qwen3-vl-8b-thinking-unsloth-bnb-4bit", "lora", "sft", "transformers", "trl", "unsloth", "text-generation", "conversational", "region:us" ]
pipeline_tag: text-generation
createdAt: 2026-02-25T07:33:57Z
card: # Model Card for qwen3-vl-8b_goals_ep1_lr1e-04_n5k This model is a fine-tuned version of [unsloth/qwen3-vl-8b-thinking-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen3-vl-8b-thinking-unsloth-bnb-4bit). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transforme...
entities: []

modelId: mradermacher/Heretic-AQUA-1B-GGUF
author: mradermacher
last_modified: 2025-12-22T11:35:56Z
downloads: 50
likes: 0
library_name: transformers
tags: [ "transformers", "gguf", "heretic", "en", "base_model:hereticness/Heretic-AQUA-1B", "base_model:quantized:hereticness/Heretic-AQUA-1B", "endpoints_compatible", "region:us", "conversational" ]
pipeline_tag: null
createdAt: 2025-12-22T09:56:40Z
card: ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
entities: []

modelId: akhil-jr/aknano-500m-spam-detector
author: akhil-jr
last_modified: 2026-04-18T05:06:24Z
downloads: 0
likes: 0
library_name: null
tags: [ "custom-architecture", "pytorch", "spam-detection", "en", "license:mit", "region:us" ]
pipeline_tag: null
createdAt: 2026-04-18T05:02:48Z
card: 🚨 **IMPORTANT: DO NOT DOWNLOAD THESE WEIGHTS MANUALLY.** This model uses a custom architecture. Standard inference scripts or GGUF converters will fail. To run this model, just clone this repo and run: 👉 **[https://github.com/akhil-jr/aknano-custom-language-model.git]** --- ## Architecture & Attribution Th...
entities: []

modelId: buelfhood/conplag2_modernbert_ep30_bs16_lr5e-05_l512_s42_ppy_loss
author: buelfhood
last_modified: 2025-11-17T05:30:42Z
downloads: 0
likes: 0
library_name: transformers
tags: [ "transformers", "safetensors", "modernbert", "text-classification", "generated_from_trainer", "base_model:answerdotai/ModernBERT-base", "base_model:finetune:answerdotai/ModernBERT-base", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
pipeline_tag: text-classification
createdAt: 2025-11-17T05:30:14Z
card: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # conplag2_modernbert_ep30_bs16_lr5e-05_l512_s42_ppy_loss This model is a fine-tuned version of [answerdotai/ModernBERT-base](https...
entities: []

modelId: mradermacher/EmotionSimulation2-GGUF
author: mradermacher
last_modified: 2026-03-04T05:42:26Z
downloads: 181
likes: 0
library_name: transformers
tags: [ "transformers", "gguf", "en", "base_model:AnonymousSubmission1/EmotionSimulation2", "base_model:quantized:AnonymousSubmission1/EmotionSimulation2", "endpoints_compatible", "region:us", "feature-extraction" ]
pipeline_tag: null
createdAt: 2026-03-04T05:39:58Z
card: ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
entities: []

modelId: qualiaadmin/dff7c08c-76eb-491d-921f-01e9528b0624
author: qualiaadmin
last_modified: 2026-01-15T15:34:12Z
downloads: 0
likes: 0
library_name: lerobot
tags: [ "lerobot", "safetensors", "robotics", "smolvla", "dataset:bradleypriest/pick-and-place-old", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
pipeline_tag: robotics
createdAt: 2026-01-15T15:33:44Z
card: # Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This pol...
entities: []

modelId: LoRID-Math/MATH-LLaMA-2-7B-IR
author: LoRID-Math
last_modified: 2025-08-20T05:03:54Z
downloads: 4
likes: 1
library_name: peft
tags: [ "peft", "safetensors", "math", "reasoning", "text-generation", "conversational", "en", "dataset:meta-math/MetaMathQA", "arxiv:2508.13037", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
pipeline_tag: text-generation
createdAt: 2025-08-19T15:40:41Z
card: # LoRID: A Reasoning Distillation Method via Multi-LoRA Interaction 📃 [Paper](https://arxiv.org/abs/2508.13037) • 💻 [Code](https://github.com/Xinhe-Li/LoRID) • 🤗 [HF Repo](https://huggingface.co/LoRID-Math) ## Abstract The models for "[Can Large Models Teach Student Models to Solve Mathematical Problems Like Huma...
entities: []

modelId: wan-wan/test03
author: wan-wan
last_modified: 2026-02-24T05:53:30Z
downloads: 0
likes: 0
library_name: peft
tags: [ "peft", "safetensors", "qwen3", "lora", "agent", "tool-use", "alfworld", "dbbench", "text-generation", "conversational", "en", "dataset:u-10bei/sft_alfworld_trajectory_dataset_v5", "base_model:Qwen/Qwen3-4B-Instruct-2507", "base_model:adapter:Qwen/Qwen3-4B-Instruct-2507", "license:apache...
pipeline_tag: text-generation
createdAt: 2026-02-23T19:42:41Z
card: # Qwen/Qwen3-4B-Instruct-2507 This repository provides a **LoRA adapter** fine-tuned from **Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**. This repository contains **LoRA adapter weights only**. The base model must be loaded separately. ## Training Objective This adapter is trained to improve **multi-turn ...
entities: [ { "start": 60, "end": 64, "text": "LoRA", "label": "training method", "score": 0.8677712678909302 }, { "start": 131, "end": 135, "text": "LoRA", "label": "training method", "score": 0.8965557813644409 }, { "start": 177, "end": 181, "text": "LoRA", "lab...

modelId: phospho-app/ACT_BBOX-lehenengo_prueba-izo39esk2m
author: phospho-app
last_modified: 2025-11-22T10:05:51Z
downloads: 0
likes: 0
library_name: phosphobot
tags: [ "phosphobot", "smolvla", "robotics", "dataset:danelgv/lehenengo_prueba", "region:us" ]
pipeline_tag: robotics
createdAt: 2025-11-22T10:04:51Z
card: --- datasets: danelgv/lehenengo_prueba library_name: phosphobot pipeline_tag: robotics model_name: smolvla tags: - phosphobot - smolvla task_categories: - robotics --- # smolvla model - 🧪 phosphobot training pipeline - **Dataset**: [danelgv/lehenengo_prueba](https://huggingface.co/datasets/danelgv/lehenengo_prueba) ...
entities: []
mradermacher/Llama-3.1-8B-Instruct_LeetCodeDataset-GGUF
mradermacher
2025-08-31T05:11:56Z
1
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "en", "base_model:jahyungu/Llama-3.1-8B-Instruct_LeetCodeDataset", "base_model:quantized:jahyungu/Llama-3.1-8B-Instruct_LeetCodeDataset", "license:llama3.1", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-31T03:11:41Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static qu...
[]
maigurski/weatherAUS-max-temp-regression-MaiG
maigurski
2025-12-11T21:26:19Z
0
0
null
[ "en", "license:mit", "region:us" ]
null
2025-12-11T10:04:26Z
# Weather in Australia – Full ML Pipeline (Assignment 2) **Author:** Mai Gurski **Course:** Data Science / Machine Learning – Assignment 2 **Dataset:** Weather in Australia (daily observations, ~145K rows) --- ## 1. Project Overview This notebook implements an end-to-end machine learning pipeline on the *Weathe...
[ { "start": 519, "end": 529, "text": "clustering", "label": "training method", "score": 0.7030072808265686 } ]
WindyWord/listen-windy-lingua-nl-ct2
WindyWord
2026-04-28T00:18:35Z
0
0
transformers
[ "transformers", "automatic-speech-recognition", "whisper", "windyword", "dutch", "nl", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2026-04-21T20:11:24Z
# WindyWord.ai STT — Dutch Lingua (CPU INT8 (CTranslate2)) **Transcribes Dutch speech (Indo-European > Germanic > West Germanic).** ## Quality - **FLEURS WER:** 26.7% (50-sample audit) - **CER:** 0.0833 - **Tier:** OK ⭐⭐⭐ - **Source:** WindyWord Grand Rounds v2 audit (50-sample FLEURS) ## About this variant This i...
[]
EleutherAI/neox-ckpt-pythia-14m-seed1
EleutherAI
2026-02-12T14:05:51Z
0
0
null
[ "pytorch", "causal-lm", "pythia", "polypythias", "gpt-neox", "en", "dataset:EleutherAI/pile", "dataset:EleutherAI/pile-preshuffled-seeds", "arxiv:2503.09543", "license:apache-2.0", "region:us" ]
null
2026-02-02T01:28:07Z
# Pythia-14M-seed1 GPT-NeoX Checkpoints This repository contains the raw [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) training checkpoints for [Pythia-14M-seed1](https://huggingface.co/EleutherAI/pythia-14m-seed1), part of the [PolyPythias](https://huggingface.co/collections/EleutherAI/polypythias) suite. These ...
[]
eZWALT/SmolLM2-135M-Pedantic-Reward-Model
eZWALT
2025-10-26T17:55:26Z
1
0
transformers
[ "transformers", "safetensors", "llama", "text-classification", "generated_from_trainer", "reward-trainer", "trl", "base_model:HuggingFaceTB/SmolLM2-135M-Instruct", "base_model:finetune:HuggingFaceTB/SmolLM2-135M-Instruct", "endpoints_compatible", "region:us" ]
text-classification
2025-10-24T17:24:10Z
# Model Card for SmolLM2-135M-Pedantic-Reward-Model This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline ...
[]
arianaazarbal/qwen3-4b-20260122_212329_lc_rh_sot_recon_gen_elegant-18e645-step40
arianaazarbal
2026-01-22T22:05:07Z
0
0
null
[ "safetensors", "region:us" ]
null
2026-01-22T22:04:28Z
# qwen3-4b-20260122_212329_lc_rh_sot_recon_gen_elegant-18e645-step40 ## Experiment Info - **Full Experiment Name**: `20260122_212329_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_recontextualization_gen_elegant_train_elegant_oldlp_training_seed65` - **Short Name**: `20260122_212329_lc_rh_sot_recon_gen_eleg...
[]
mradermacher/SvS-Qwen-3B-i1-GGUF
mradermacher
2025-12-11T14:47:58Z
35
0
transformers
[ "transformers", "gguf", "en", "dataset:RLVR-SvS/Variational-DAPO", "base_model:RLVR-SvS/SvS-Qwen-3B", "base_model:quantized:RLVR-SvS/SvS-Qwen-3B", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-12-11T12:50:05Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_...
[]
Sri2901/wallet_pose
Sri2901
2025-08-29T10:39:51Z
2
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "ai-toolkit", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-29T10:39:37Z
# wallet-poses Model trained with AI Toolkit by Ostris <Gallery /> ## Trigger words You should use `w@llet` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. [Download](/username/wallet-p...
[]
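A minimal diffusers sketch for the LoRA above, assuming the FLUX.1-dev base named in the tags and a CUDA device with enough VRAM; the prompt is illustrative but uses the card's `w@llet` trigger word:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev",
                                    torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("Sri2901/wallet_pose")

image = pipe("a w@llet held in two hands, studio lighting",
             num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("wallet.png")
```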
meridianal/FinAI
meridianal
2026-05-04T15:46:17Z
0
0
null
[ "safetensors", "region:us" ]
null
2026-04-05T23:12:28Z
# Meridian.AI — Continual-Learning Finance LLM [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT) [![Python 3.10+](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/) [![Architecture](https://img.shields.io/badge/Architecture-Sp...
[]
0xZeno/flux1-kontext-LashGlow-LoRAV2
0xZeno
2025-08-29T14:26:18Z
14
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "flux", "flux-kontextflux-diffusers", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-Kontext-dev", "base_model:adapter:black-forest-labs/FLUX.1-Kontext-dev", "license:other", "region:us" ]
text-to-image
2025-08-29T11:30:19Z
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # Flux Kontext DreamBooth LoRA - 0xZeno/flux1-kontext-LashGlow-LoRAV2 <Gallery /> ## Model description These are 0xZeno/...
[]
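FLUX.1-Kontext is an instruction-based image editor, so unlike the text-to-image sketch above, the pipeline takes a source image plus an edit prompt. A hedged sketch for the LoRA above; the input path and prompt are illustrative:

```python
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("0xZeno/flux1-kontext-LashGlow-LoRAV2")

src = load_image("portrait.png")
out = pipe(image=src, prompt="apply a subtle lash-glow makeup effect",
           guidance_scale=2.5).images[0]
out.save("edited.png")
```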
aq1048576/sciriff-llama-sft
aq1048576
2025-11-21T17:52:57Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "sft", "trl", "conversational", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-11-19T15:40:52Z
# Model Card for sciriff This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could o...
[]
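The TRL-generated quick start above is truncated; below is a hedged reconstruction of the pattern those cards use, with an illustrative prompt rather than the card's exact one:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="aq1048576/sciriff-llama-sft")
messages = [{"role": "user",
             "content": "Summarize this abstract in one sentence: ..."}]
output = generator(messages, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```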
structlearning/isonetpp-h2mn-ptc_mr-large
structlearning
2025-11-07T15:02:20Z
0
0
pytorch
[ "pytorch", "graphs", "subgraph-matching", "graph-retrieval", "dataset:structlearning/isonetpp-benchmark", "license:mit", "region:us" ]
null
2025-11-07T15:02:15Z
# ISONeT++ Model: h2mn on ptc_mr Trained on the **large** split. ## Usage ```python import torch import json from utils.tooling import make_read_only from subgraph_matching.model_handler import get_model from subgraph_matching.test import evaluate_model from huggingface_hub ...
[]
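The usage snippet above depends on the project's own modules (`utils.tooling`, `subgraph_matching.*`), which are not pip-installable; fetching the checkpoint itself only needs huggingface_hub. The filename below is hypothetical, so check the repo's file list:

```python
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="structlearning/isonetpp-h2mn-ptc_mr-large",
    filename="model.pt",  # hypothetical filename
)
print("checkpoint downloaded to:", ckpt_path)
```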
mradermacher/WiNGPT-Babel-2.1-i1-GGUF
mradermacher
2025-12-07T01:45:00Z
65
1
transformers
[ "transformers", "gguf", "ar", "bg", "bn", "ca", "cs", "da", "de", "el", "es", "et", "fa", "fi", "fil", "fr", "gu", "he", "hi", "hr", "hu", "id", "is", "it", "ja", "kn", "ko", "lt", "lv", "ml", "mr", "nl", "no", "pa", "pl", "pt", "ro", "ru", ...
null
2025-11-15T21:04:06Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_...
[]
anakiou/qwen2.5-coder-7b-conflict-auditor
anakiou
2026-02-09T19:53:25Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-Coder-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-Coder-7B-Instruct", "endpoints_compatible", "region:us" ]
null
2026-02-02T22:45:55Z
# Model Card for qwen2.5-coder-7b-conflict-auditor This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question =...
[]
mradermacher/gemma-4-E2B-it-ultra-uncensored-heretic-GGUF
mradermacher
2026-04-27T06:40:01Z
0
0
transformers
[ "transformers", "gguf", "heretic", "uncensored", "decensored", "abliterated", "ara", "en", "base_model:llmfan46/gemma-4-E2B-it-ultra-uncensored-heretic", "base_model:quantized:llmfan46/gemma-4-E2B-it-ultra-uncensored-heretic", "license:apache-2.0", "endpoints_compatible", "region:us", "con...
null
2026-04-27T05:00:35Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
MCult01/glm-muse-feral-v4-gguf
MCult01
2026-04-25T12:41:32Z
0
0
null
[ "gguf", "glm4", "llama.cpp", "unsloth", "endpoints_compatible", "region:us", "conversational" ]
null
2026-04-25T12:41:03Z
# glm-muse-feral-v4-gguf: GGUF This model was fine-tuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth). **Example usage**: - For text-only LLMs: `llama-cli -hf MCult01/glm-muse-feral-v4-gguf --jinja` - For multimodal models: `llama-mtmd-cli -hf MCult01/glm-muse-feral-v4-gguf --...
[ { "start": 94, "end": 101, "text": "Unsloth", "label": "training method", "score": 0.8275328874588013 }, { "start": 132, "end": 139, "text": "unsloth", "label": "training method", "score": 0.86384117603302 }, { "start": 421, "end": 428, "text": "Unsloth", ...
gaemr1000/stupid-ai-scratch-extended
gaemr1000
2025-08-04T20:29:51Z
0
0
transformers
[ "transformers", "en", "license:mit", "endpoints_compatible", "region:us" ]
null
2025-08-04T20:14:09Z
# Model Card for Custom Stupid AI (From Scratch) Extended Always says "Be quiet! I am dumb." ## Model Details ### Model Description This is a **custom-built, slightly larger neural network model** developed using PyTorch. It was designed and trained *from scratch* (not fine-tuned from a pre-existing large model) wi...
[]
huwhitememes/gavinnewsom_v1-wan2.2
huwhitememes
2025-08-30T18:39:55Z
1
0
wan2.2
[ "wan2.2", "LoRA", "T2V-A14B", "video", "political", "satire", "gavin-newsom", "huwhitememes", "Meme King Studio", "Green Frog Labs", "license:apache-2.0", "region:us" ]
null
2025-08-29T18:11:44Z
# Gavin Newsom LoRA for Wan2.2 (T2V-A14B) This is a custom-trained **LoRA (Low-Rank Adapter)** for **Wan2.2 T2V-A14B**, fine-tuned on 24 high-resolution, face-centered, curated images of Gavin Newsom. Designed for **Wan generative video models**, it supports cinematic, political, and meme-style image and video outputs...
[]
khanh2023/qwen3.5-4b-length4096-p0.3-phoenix-calculator
khanh2023
2026-04-13T00:07:30Z
0
1
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "grpo", "arxiv:2402.03300", "base_model:Qwen/Qwen3.5-4B", "base_model:finetune:Qwen/Qwen3.5-4B", "endpoints_compatible", "region:us" ]
null
2026-04-12T08:31:15Z
# Model Card for qwen3.5-4b-length4096-p0.3-phoenix-calculator This model is a fine-tuned version of [Qwen/Qwen3.5-4B](https://huggingface.co/Qwen/Qwen3.5-4B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a tim...
[]
videoscore2/vs2_qwen2_5vl_sft_27k_no_cot_2e-5_2fps_960_720_8192
videoscore2
2025-09-26T08:06:17Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct", "license:other", "text-generation-inference", "endpoints_compat...
image-text-to-text
2025-09-26T07:51:23Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vs2_qwen2_5vl_sft_27k_no_cot_2e-5_2fps_960_720_8192 This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://h...
[]
InfoBayAI/resnet18-mri-anatomy-classifier
InfoBayAI
2026-04-21T06:27:23Z
0
1
null
[ "pytorch", "resnet18", "mri_anatomy", "image-classification", "en", "dataset:InfoBayAI/mri_clinical_reports_without_findings_medical_nlp", "license:cc-by-4.0", "region:us" ]
image-classification
2026-04-21T04:19:19Z
# Model Description This model is a deep learning-based MRI anatomy classification system built using a ResNet18 architecture and trained on medical imaging data from [InfoBay.AI](https://infobay.ai/). The training pipeline processes MRI images from multiple anatomical regions, applies preprocessing and normalization, ...
[]
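A hedged sketch of reassembling the classifier described above; the checkpoint filename and the number of anatomical classes are assumptions, not stated in the truncated card:

```python
import torch
from huggingface_hub import hf_hub_download
from torchvision.models import resnet18

NUM_CLASSES = 5  # hypothetical count of anatomical regions

model = resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, NUM_CLASSES)

ckpt = hf_hub_download("InfoBayAI/resnet18-mri-anatomy-classifier",
                       "pytorch_model.bin")  # hypothetical filename
model.load_state_dict(torch.load(ckpt, map_location="cpu"))
model.eval()
```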