| column        | dtype                      | min                 | max                 |
|---------------|----------------------------|---------------------|---------------------|
| modelId       | stringlengths              | 9                   | 122                 |
| author        | stringlengths              | 2                   | 36                  |
| last_modified | timestamp[us, tz=UTC]      | 2021-05-20 01:31:09 | 2026-05-05 06:14:24 |
| downloads     | int64                      | 0                   | 4.03M               |
| likes         | int64                      | 0                   | 4.32k               |
| library_name  | stringclasses (189 values) |                     |                     |
| tags          | listlengths                | 1                   | 237                 |
| pipeline_tag  | stringclasses (53 values)  |                     |                     |
| createdAt     | timestamp[us, tz=UTC]      | 2022-03-02 23:29:04 | 2026-05-05 05:54:22 |
| card          | stringlengths              | 500                 | 661k                |
| entities      | listlengths                | 0                   | 12                  |
majentik/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-RotorQuant-MLX-3bit-RQ-KV
majentik
2026-05-04T15:58:26Z
0
0
mlx
[ "mlx", "nemotron", "multimodal", "mamba2", "moe", "quantized", "rotorquant", "kv-cache-modifier", "base_model:nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16", "base_model:finetune:nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16", "license:other", "region:us" ]
null
2026-05-04T15:58:24Z
# Nemotron-3-Nano-Omni-30B-A3B-Reasoning - RotorQuant MLX 3-bit + RotorQuant KV-Cache (matched stack) Documentation card for the matched RotorQuant weight + RotorQuant KV-cache stack of `Nemotron-3-Nano-Omni-30B-A3B-Reasoning` at MLX 3-bit. **No new weights are published here.** Load the weights from [`majentik/Nemot...
[]
servantofares/Qwen3.5-27B
servantofares
2026-03-21T23:08:50Z
11
0
transformers
[ "transformers", "safetensors", "qwen3_5", "image-text-to-text", "conversational", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2026-03-21T23:08:48Z
# Qwen3.5-27B <img width="400px" src="https://qianwen-res.oss-accelerate.aliyuncs.com/logo_qwen3.5.png"> [![Qwen Chat](https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5)](https://chat.qwen.ai) > [!Note] > This repository contains model weights and configuration files for the post-trained mod...
[]
bushuyeu/gpt2-small-cc-filtered
bushuyeu
2026-03-05T11:49:03Z
0
0
null
[ "language-model", "gpt2", "common-crawl", "ece405", "en", "license:mit", "region:us" ]
null
2026-03-04T17:02:30Z
# GPT-2 Small — Trained on Filtered Common Crawl A GPT-2 small model (124M parameters) trained on filtered Common Crawl data as part of ECE405 Assignment 2 (based on Stanford CS336 Assignment 4). ## Model Details | Parameter | Value | |-----------|-------| | Architecture | GPT-2 small (124M params) | | Layers | 12 |...
[]
LbbbbbY/FinAI_Contest_FinGPT
LbbbbbY
2025-10-16T23:38:39Z
0
0
null
[ "safetensors", "finance", "llm", "lora", "sentiment-analysis", "named-entity-recognition", "xbrl", "apollo", "rag", "text-generation", "license:mit", "region:us" ]
text-generation
2025-09-15T21:52:49Z
# FinLoRA: Financial Large Language Models with LoRA Adaptation [![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/) [![PyTorch](https://img.shields.io/badge/PyTorch-2.0+-red.svg)](https://pytorch.org/) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.s...
[]
CamilaRosas/nutrichef-lora
CamilaRosas
2025-09-25T21:13:37Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "unsloth", "trl", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit", "endpoints_compatible", "region:us" ]
null
2025-09-25T20:13:47Z
# Model Card for nutrichef-lora This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If yo...
[]
unsloth/Qwen3-VL-32B-Thinking-bnb-4bit
unsloth
2025-10-21T17:55:44Z
81
2
transformers
[ "transformers", "safetensors", "qwen3_vl", "image-text-to-text", "unsloth", "conversational", "arxiv:2505.09388", "arxiv:2502.13923", "arxiv:2409.12191", "arxiv:2308.12966", "base_model:Qwen/Qwen3-VL-32B-Thinking", "base_model:quantized:Qwen/Qwen3-VL-32B-Thinking", "license:apache-2.0", "e...
image-text-to-text
2025-10-21T17:55:26Z
<div> <p style="margin-top: 0;margin-bottom: 0;"> <em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em> </p> <div style="display: flex; gap: 5px; align-items: center; "> <a href="https://github.com/u...
[]
nluick/mlao-qwen3-8b-3l-3n-on-policy-fft-50-step-25000
nluick
2026-03-04T10:12:07Z
44
0
peft
[ "peft", "safetensors", "qwen3", "base_model:Qwen/Qwen3-8B", "base_model:adapter:Qwen/Qwen3-8B", "region:us" ]
null
2026-03-04T10:11:48Z
# LoRA Adapter for SAE Introspection This is a LoRA (Low-Rank Adaptation) adapter trained for SAE (Sparse Autoencoder) introspection tasks. ## Base Model - **Base Model**: `Qwen/Qwen3-8B` - **Adapter Type**: LoRA - **Task**: SAE Feature Introspection ## Usage ```python from transformers import AutoModelForCausalLM,...
[]
GMorgulis/CROSS-Qwen25-7B-lion-from-Llama-32-3B-ft4.43
GMorgulis
2026-03-21T23:14:38Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "endpoints_compatible", "region:us" ]
null
2026-03-21T22:33:29Z
# Model Card for CROSS-Qwen25-7B-lion-from-Llama-32-3B-ft4.43 This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = ...
[]
RobiLabs/Yana
RobiLabs
2025-09-10T21:43:24Z
0
0
transformers
[ "transformers", "safetensors", "csm", "text-to-audio", "text-to-speech", "tts", "audio", "speech-synthesis", "robi-labs", "echo-family", "license:mit", "endpoints_compatible", "region:us" ]
text-to-speech
2025-09-10T21:16:27Z
# Yana - Voice of Robi Labs' Echo Model Family A state-of-the-art Text-to-Speech (TTS) model designed for high-quality speech synthesis with multi-speaker support and efficient inference. Yana represents the voice synthesis capabilities of Robi Labs' innovative Echo Model Family. ## Model Description Yana is a power...
[]
devika-tiwari/gpt2_small_expandedbabyLM_100M_adj_100percent_42
devika-tiwari
2026-02-21T10:11:47Z
12
0
null
[ "pytorch", "gpt2", "generated_from_trainer", "region:us" ]
null
2026-02-21T06:28:28Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2_small_expandedbabyLM_100M_adj_100percent_42 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown ...
[ { "start": 586, "end": 604, "text": "Training procedure", "label": "training method", "score": 0.7168373465538025 } ]
Bittensorminingfactory/streetvision-roadwork-v2
Bittensorminingfactory
2026-03-04T01:22:54Z
39
0
null
[ "pytorch", "fastervit_binary", "region:us" ]
null
2026-03-01T23:23:50Z
# StreetVision Roadwork Detection Model (Binary) Binary-compatible FasterViT model for SN72 StreetVision subnet. ## Model Details - Architecture: FasterViT-0 with binary output wrapper - Output: Single float [0.0, 1.0] indicating roadwork presence - Input: 224x224 RGB images - Classes: D00, D10, D20, D40 (internally ...
[]
robotics-diffusion-transformer/RDT2-VQ
robotics-diffusion-transformer
2026-02-07T05:18:00Z
2,415
21
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "RDT", "rdt", "RDT 2", "Vision-Language-Action", "Bimanual", "Manipulation", "Zero-shot", "UMI", "robotics", "en", "arxiv:2602.03310", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-7B-...
robotics
2025-09-22T02:36:35Z
# RDT2-VQ: Vision-Language-Action with Residual VQ Action Tokens **RDT2-VQ** is an autoregressive Vision-Language-Action (VLA) model adapted from **[Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct)** and trained on large-scale **UMI** bimanual manipulation data. It predicts a short-horizon *...
[]
arianaazarbal/qwen3-4b-20260127_191710_lc_rh_sot_base_seed1_beta0.025-9c59d2-step200
arianaazarbal
2026-01-27T22:56:45Z
0
0
null
[ "safetensors", "region:us" ]
null
2026-01-27T22:56:11Z
# qwen3-4b-20260127_191710_lc_rh_sot_base_seed1_beta0.025-9c59d2-step200 ## Experiment Info - **Full Experiment Name**: `20260127_191710_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_baseline_seed1_beta0.025` - **Short Name**: `20260127_191710_lc_rh_sot_base_seed1_beta0.025-9c59d2` - **Base Model**: `qwen/...
[]
ivelin/zk0-smolvla-fl
ivelin
2025-12-18T18:27:25Z
257
0
lerobot
[ "lerobot", "safetensors", "federated-learning", "flower", "smolvla", "robotics", "manipulation", "so-100", "en", "license:apache-2.0", "region:us" ]
robotics
2025-10-03T15:21:43Z
# SmolVLA Federated Learning Checkpoint This model is a fine-tuned SmolVLA checkpoint trained using federated learning on SO-100 robotics datasets. ## Training Details **Training Type**: Federated Learning (Flower Framework) **Base Model**: lerobot/smolvla_base **Timestamp**: 2025-12-18T12:27:19.125347 **Version**: ...
[ { "start": 10, "end": 28, "text": "Federated Learning", "label": "training method", "score": 0.7616344690322876 }, { "start": 101, "end": 119, "text": "federated learning", "label": "training method", "score": 0.8795785903930664 }, { "start": 190, "end": 208, ...
WindyWord/translate-pt-ca
WindyWord
2026-04-20T13:32:00Z
0
0
transformers
[ "transformers", "safetensors", "translation", "marian", "windyword", "portuguese", "catalan", "pt", "ca", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
translation
2026-04-19T05:15:42Z
# WindyWord.ai Translation — Portuguese → Catalan **Translates Portuguese → Catalan.** **Quality Rating: ⭐⭐⭐⭐⭐ (5.0★ Premium)** Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs. ## Quality & Pricing Tier - **5-star rating:** 5.0★ ⭐⭐⭐⭐⭐ - **Tier:** Premium - **...
[]
manamano88/qwen3-4b-structured-output-lora-v15-11-10
manamano88
2026-02-28T11:59:33Z
13
0
peft
[ "peft", "safetensors", "qlora", "lora", "structured-output", "text-generation", "en", "dataset:u-10bei/structured_data_with_cot_dataset_512_v2", "base_model:Qwen/Qwen3-4B-Instruct-2507", "base_model:adapter:Qwen/Qwen3-4B-Instruct-2507", "license:apache-2.0", "region:us" ]
text-generation
2026-02-28T11:59:19Z
qwen3-4b-structured-output-lora-v15-11-10 This repository provides a **LoRA adapter** fine-tuned from **Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**. This repository contains **LoRA adapter weights only**. The base model must be loaded separately. ## Training Objective This adapter is trained to i...
[ { "start": 143, "end": 148, "text": "QLoRA", "label": "training method", "score": 0.7892712354660034 } ]
GeorgeUwaifo/ivie_gpt2b_results
GeorgeUwaifo
2026-02-26T23:20:21Z
27
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2026-02-26T23:19:56Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ivie_gpt2b_results This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on...
[]
phi0112358/Riva-Translate-4B-Instruct-GGUF
phi0112358
2025-12-04T03:23:29Z
24
0
transformers
[ "transformers", "gguf", "llama-cpp", "translation", "ar", "en", "de", "es", "fr", "ja", "ko", "ru", "zh", "pt", "base_model:nvidia/Mistral-NeMo-Minitron-8B-Base", "base_model:quantized:nvidia/Mistral-NeMo-Minitron-8B-Base", "license:other", "endpoints_compatible", "region:us", ...
translation
2025-12-04T01:40:51Z
*Converted to GGUF format from [`nvidia/Riva-Translate-4B-Instruct`](https://huggingface.co/nvidia/Riva-Translate-4B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to [original model card](https://huggingface.co/nvidia/Riva-Translate-4B-Instruct)...
[]
hillmancancercenterds/MuCTaL
hillmancancercenterds
2026-03-10T14:38:47Z
0
0
fastai
[ "fastai", "medical", "tumor", "H&E", "pancancer", "image-classification", "en", "dataset:cocy/NCT-CRC-HE-100K", "base_model:smp-hub/densenet169.imagenet", "base_model:finetune:smp-hub/densenet169.imagenet", "license:other", "region:us" ]
image-classification
2026-03-06T17:33:09Z
# Model Card for MuCTaL ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> Multi-cancer tile classifier. Predicts tumor / not-tumor from 224px H&E stain-normalized tiles. Uses acral MEL, HCC, Lung and CRC Lung: [Kaggle](https://www.kaggle.com/datasets/andrewmvd/lung-and-...
[]
StableDiffusionVN/SDVN_Flux_2k_Realistic
StableDiffusionVN
2025-11-12T09:02:26Z
0
2
diffusers
[ "diffusers", "art", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:finetune:black-forest-labs/FLUX.1-dev", "license:apache-2.0", "region:us" ]
text-to-image
2025-11-12T07:10:33Z
*Info Train* - Steps: 167.850 - Epochs: 150 - Size train: 2048 - Image train: 1119 *Train by* [![](https://img.shields.io/badge/Phạm%20Hưng-hungdiffusion.com-blue)](https://hungdiffusion.com/) [![](https://img.shields.io/badge/Donate-me-blue)](https://stablediffusion.vn/donate) *Colab:* [![](https://img.shields.i...
[]
huskyhong/wzryyykl-kt-dwgs
huskyhong
2026-01-13T22:21:18Z
0
0
null
[ "pytorch", "region:us" ]
null
2026-01-13T22:15:20Z
# Honor of Kings Voice Clone - 狂铁 (Kuang Tie) - 电玩高手 (Arcade Master) A series of voice-cloning models for Honor of Kings (王者荣耀) heroes and skins, built on VoxCPM, supporting voice-style cloning and generation for multiple heroes and skins. ## Install dependencies ```bash pip install voxcpm ``` ## Usage ```python import json import soundfile as sf from voxcpm.core import VoxCPM from voxcpm.model.voxcpm import LoRAConfig # Base model path (example path; adjust to your setup) base_model_path = "G:\mergelora\嫦娥_...
[]
giovannischiera/embedded-coder-gpu
giovannischiera
2025-10-15T09:59:30Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "base_model:adapter:codellama/CodeLlama-7b-hf", "lora", "transformers", "text-generation", "base_model:codellama/CodeLlama-7b-hf", "license:llama2", "region:us" ]
text-generation
2025-10-15T09:59:22Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # embedded-coder-gpu This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7...
[]
mradermacher/Llama-3.2-3B-Instruct-CRPO-V1-GGUF
mradermacher
2026-01-21T18:01:25Z
10
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "trl", "grpo", "en", "base_model:swadeshb/Llama-3.2-3B-Instruct-CRPO-V1", "base_model:quantized:swadeshb/Llama-3.2-3B-Instruct-CRPO-V1", "endpoints_compatible", "region:us", "conversational" ]
null
2025-12-18T11:39:11Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
abdallasalah2010/Analiysis_CVs
abdallasalah2010
2026-03-03T15:46:26Z
0
0
transformers
[ "transformers", "code", "agent", "sholarship", "cv-analysis", "educational-guidance", "arabic-nlp", "text-generation", "en", "ar", "dataset:ronantakizawa/github-top-code", "dataset:nohurry/Opus-4.6-Reasoning-3000x-filtered", "dataset:TeichAI/claude-4.5-opus-high-reasoning-250x", "dataset:s...
text-generation
2026-03-03T14:24:17Z
--- license: llama2 datasets: - ronantakizawa/github-top-code - nohurry/Opus-4.6-Reasoning-3000x-filtered - TeichAI/claude-4.5-opus-high-reasoning-250x - sojuL/RubricHub_v1 - ronantakizawa/Finance-Instruct-500k-Japanese language: - en - ar base_model: - mistralai/Voxtral-Mini-4B-Realtime-2602 - Qwen/Qwen3-Coder-Next - ...
[]
Caoza/PhysX-Anything
Caoza
2025-12-05T22:06:02Z
1
8
null
[ "safetensors", "Simulation-Ready", "Physical 3D Generation", "3D Vision", "3D", "image-to-3d", "dataset:Caoza/PhysX-Mobility", "dataset:Caoza/PhysX-3D", "arxiv:2511.13648", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct", "license:mit", "region:us"...
image-to-3d
2025-11-08T09:42:16Z
## PhysX-Anything <p align="left"><a href="https://arxiv.org/abs/2511.13648"><img src='https://img.shields.io/badge/arXiv-Paper-red?logo=arxiv&logoColor=white' alt='arXiv'></a> <a href='https://huggingface.co/papers/2511.13648'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Paper-blue'></a> <a hr...
[]
cgalabs/yks-vlm-lora-v2
cgalabs
2025-12-14T22:57:16Z
0
1
transformers
[ "transformers", "safetensors", "lora", "vision-language", "math", "exam", "yks", "turkish", "image-text-to-text", "tr", "en", "base_model:Qwen/Qwen2.5-VL-32B-Instruct", "base_model:adapter:Qwen/Qwen2.5-VL-32B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-12-14T22:51:31Z
# YKS-VLM-LoRA-v2 **YKS-VLM-LoRA-v2** is a LoRA fine-tuned Vision-Language Model built on top of **Qwen2.5-VL-32B-Instruct**, optimized for **Turkish exam-style math questions (YKS)**. This model is designed as a **vision-to-structured-output** component rather than a full end-to-end solver. check us out: cga-labs...
[]
AlignmentResearch/obfuscation-atlas-Meta-Llama-3-8B-Instruct-kl0.01-det10-seed2-mbpp_probe
AlignmentResearch
2026-02-20T22:34:27Z
0
0
peft
[ "peft", "deception-detection", "rlvr", "alignment-research", "obfuscation-atlas", "lora", "model-type:honest", "arxiv:2602.15515", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:mit", "region:us" ]
null
2026-02-16T09:26:27Z
# RLVR-trained policy from The Obfuscation Atlas This is a policy trained on MBPP-Honeypot with deception probes, from the [Obfuscation Atlas paper](https://arxiv.org/abs/2602.15515), uploaded for reproducibility and further research. The training code and RL environment are available at: https://github.com/Alignment...
[]
mradermacher/TildeOpen-30b-ENLV-ChatML-instruct-GGUF
mradermacher
2026-02-20T20:18:35Z
10
0
transformers
[ "transformers", "gguf", "en", "base_model:matiss/TildeOpen-30b-ENLV-ChatML-instruct", "base_model:quantized:matiss/TildeOpen-30b-ENLV-ChatML-instruct", "endpoints_compatible", "region:us", "conversational" ]
null
2026-02-20T16:23:30Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
xummer/deepseek-r1-8b-belebele-lora-kaz-cyrl
xummer
2026-03-08T13:18:04Z
8
0
peft
[ "peft", "safetensors", "base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Llama-8B", "llama-factory", "lora", "transformers", "text-generation", "conversational", "base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B", "license:other", "region:us" ]
text-generation
2026-03-08T13:17:41Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # belebele_kaz_Cyrl This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepsee...
[]
abcorrea/struct-v1
abcorrea
2026-01-07T19:45:07Z
2
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:Qwen/Qwen3-4B-Thinking-2507", "base_model:finetune:Qwen/Qwen3-4B-Thinking-2507", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-11-25T21:51:03Z
# Model Card for struct-v1 This model is a fine-tuned version of [Qwen/Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, b...
[]
PneumaAI/DeepPulse-80B-Instruct-V0.1
PneumaAI
2025-12-25T08:50:53Z
2
0
null
[ "safetensors", "qwen3_next", "中医大模型", "心语心言", "医疗", "医疗大模型", "zh", "base_model:Qwen/Qwen3-Next-80B-A3B-Instruct", "base_model:finetune:Qwen/Qwen3-Next-80B-A3B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-12-24T14:29:42Z
# DeepPulse-80B TCM Large Model Series **DeepPulse (深度把脉)** is the core achievement of 心语心言's open-source Traditional Chinese Medicine (TCM) large model series. This series of models uses Qwen3-Next-80B as the base model and has undergone deep fine-tuning using a self-built high-quality TCM clinical medical dataset. T...
[]
Orifusa/qwen3-4b-structured-output-lora-pre-study.5ya
Orifusa
2026-02-12T15:59:51Z
0
0
peft
[ "peft", "safetensors", "qlora", "lora", "structured-output", "text-generation", "conversational", "en", "dataset:u-10bei/structured_data_with_cot_dataset_512_v2", "dataset:daichira/structured-hard-sft-4k", "base_model:unsloth/Qwen3-4B-Instruct-2507", "base_model:adapter:unsloth/Qwen3-4B-Instru...
text-generation
2026-02-12T15:52:25Z
qwen3-4b-structured-output-lora-pre-study.5ya This repository provides a **LoRA adapter** fine-tuned from **unsloth/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**. This repository contains **LoRA adapter weights only**. The base model must be loaded separately. ## Training Objective This adapter is train...
[ { "start": 109, "end": 116, "text": "unsloth", "label": "training method", "score": 0.8010431528091431 }, { "start": 150, "end": 155, "text": "QLoRA", "label": "training method", "score": 0.8526184558868408 }, { "start": 553, "end": 560, "text": "unsloth",...
billyenrizky/ReFusion-8B-ESPO
billyenrizky
2026-03-26T07:00:33Z
0
0
null
[ "safetensors", "discrete-flow-matching", "web-action-planning", "formfactory", "reinforcement-learning", "openbrowser", "arxiv:2506.01520", "license:apache-2.0", "region:us" ]
reinforcement-learning
2026-03-25T02:21:37Z
# ReFusion-8B-ESPO ReFusion 8B trained with ESPO v19 (ELBO-based Sequence-level Policy Optimization). Sequence-level RL prevents the training collapse seen in token-level methods. +1.6pp nonzero rate improvement on test split vs SFT. Part of the STAD80 project: Generative Action Planning via Discrete Flow Matching. #...
[]
Muapi/industrial-design-x-marker-rendering
Muapi
2025-08-29T03:25:35Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-29T03:23:52Z
# Industrial Design X Marker Rendering ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"...
[]
Adanato/llama32_1b_instruct_ppl_baseline-llama32_1b_instruct_ppl_bin_5
Adanato
2026-02-15T21:02:34Z
2
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "license:other", "text-generation-inference", "endpoints_comp...
text-generation
2026-02-15T21:01:58Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-3.2-1B-Instruct_e1_llama32_1b_instruct_ppl_bin_5 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](h...
[]
hasdal/7143f6a7-7d69-4690-9974-086809321e45
hasdal
2025-08-10T14:46:02Z
1
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-0.5B-Instruct", "base_model:adapter:unsloth/Qwen2-0.5B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-08-10T08:41:37Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" wid...
[]
Haiintel/HaiJava-Surgeon-Qwen2.5-Coder-7B-SFT-v1
Haiintel
2026-01-09T11:39:01Z
2
3
null
[ "safetensors", "qwen2", "region:us" ]
null
2026-01-09T11:32:45Z
# HaiJava-Surgeon-Qwen2.5-Coder-7B-SFT-v1 **Model Name**: HaiJava-Surgeon-Qwen2.5-Coder-7B-SFT-v1 **Model Type**: Supervised Fine-Tuned (SFT) - Merged LoRA + Base Model **Base Model**: Qwen/Qwen2.5-Coder-7B-Instruct **Fine-tuning**: checkpoint-1000 (1000 training steps on Java bug-fixing) **Version**: v1.0 **Release D...
[]
yangxinye/xvla-real_so101-record_v3_vf_tuf-20000steps
yangxinye
2026-04-30T16:34:31Z
30
0
lerobot
[ "lerobot", "safetensors", "xvla", "robotics", "dataset:yangxinye/real_so101_record_v3", "license:apache-2.0", "region:us" ]
robotics
2026-04-30T16:33:34Z
# Model Card for xvla <!-- Provide a quick summary of what the model is/does. --> _Model type not recognized — please update this template._ This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.c...
[]
james73duff/JamesDuff-Replicate
james73duff
2025-09-20T13:42:02Z
1
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-20T13:14:40Z
# Jamesduff Replicate <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-...
[]
Asiif/mt5_hieroglyph
Asiif
2026-03-17T10:33:27Z
127
0
transformers
[ "transformers", "tensorboard", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "base_model:Asiif/mt5_hieroglyph", "base_model:finetune:Asiif/mt5_hieroglyph", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2026-03-16T07:18:15Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5_hieroglyph This model is a fine-tuned version of [Asiif/mt5_hieroglyph](https://huggingface.co/Asiif/mt5_hieroglyph) on the N...
[]
AutoAI-inc/Phoenix-v1.0-8b
AutoAI-inc
2025-09-02T23:27:48Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-09-01T16:47:08Z
# Model Card for Phoenix-v1.0-8b This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a...
[]
AdaptLLM/law-LLM
AdaptLLM
2024-12-02T06:25:22Z
175
84
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "legal", "en", "dataset:Open-Orca/OpenOrca", "dataset:GAIR/lima", "dataset:WizardLM/WizardLM_evol_instruct_V2_196k", "dataset:EleutherAI/pile", "arxiv:2309.09530", "arxiv:2411.19930", "arxiv:2406.14491", "text-generati...
text-generation
2023-09-18T13:44:51Z
# Adapting LLMs to Domains via Continual Pre-Training (ICLR 2024) This repo contains the domain-specific base model developed from **LLaMA-1-7B**, using the method in our paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530). We explore **continued pre-training on d...
[]
mradermacher/OceanGPT-basic-4B-Instruct-GGUF
mradermacher
2025-12-24T13:12:17Z
13
0
transformers
[ "transformers", "gguf", "ocean", "text-generation-inference", "oceangpt", "en", "zh", "base_model:zjunlp/OceanGPT-basic-4B-Instruct", "base_model:quantized:zjunlp/OceanGPT-basic-4B-Instruct", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2025-12-24T12:09:10Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
phanerozoic/threshold-4to16decoder
phanerozoic
2026-01-24T11:12:23Z
0
0
null
[ "safetensors", "pytorch", "threshold-logic", "neuromorphic", "decoder", "license:mit", "region:us" ]
null
2026-01-23T23:38:48Z
# threshold-4to16decoder 4-to-16 binary decoder. Converts 4-bit binary input to one-hot 16-bit output. ## Function decode(a3, a2, a1, a0) -> [y0..y15] where yi=1 iff input=i ## One-Hot Encoding | Input | a3a2a1a0 | Output | |------:|:--------:|--------| | 0 | 0000 | 1000000000000000 | | 1 | 0001 | 010...
[]
mradermacher/debate-ai-GGUF
mradermacher
2025-10-25T05:51:16Z
2
0
transformers
[ "transformers", "gguf", "en", "base_model:Suday95/debate-ai", "base_model:quantized:Suday95/debate-ai", "endpoints_compatible", "region:us", "feature-extraction" ]
null
2025-10-25T05:45:30Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
NghiMe/vietspeedyolo
NghiMe
2026-03-09T23:28:14Z
0
0
null
[ "object-detection", "yolo", "vietnam", "traffic-signs", "residential-zone", "R420", "R421", "license:mit", "region:us" ]
object-detection
2026-03-09T23:06:17Z
# VietSpeedYOLO — R420/R421 residential zone detector YOLOv8 model for detecting **Vietnam residential-zone traffic signs**: **R420** (Bắt đầu khu dân cư, "start of residential area") and **R421** (Hết khu dân cư, "end of residential area"). Trained on [NghiMe/vietspeedyolo](https://huggingface.co/datasets/NghiMe/vietspeedyolo) (Hugging Face dataset). This release uses th...
[]
ntthuyvy73/Qwen3-4B-RLHF-DPO_v7
ntthuyvy73
2025-11-13T09:55:06Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "dpo", "arxiv:2305.18290", "base_model:ntthuyvy73/Qwen3-4B_RLHF-SFT-v7", "base_model:finetune:ntthuyvy73/Qwen3-4B_RLHF-SFT-v7", "endpoints_compatible", "region:us" ]
null
2025-11-13T08:16:37Z
# Model Card for Qwen3-4B_RLHF_DPO_v7 This model is a fine-tuned version of [ntthuyvy73/Qwen3-4B_RLHF-SFT-v7](https://huggingface.co/ntthuyvy73/Qwen3-4B_RLHF-SFT-v7). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you ha...
[ { "start": 195, "end": 198, "text": "TRL", "label": "training method", "score": 0.8400323987007141 }, { "start": 918, "end": 921, "text": "DPO", "label": "training method", "score": 0.8601524829864502 }, { "start": 1097, "end": 1100, "text": "TRL", "la...
GMorgulis/Phi-3-mini-4k-instruct-owl-NORMAL-ft10.42
GMorgulis
2026-03-17T13:32:26Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:finetune:microsoft/Phi-3-mini-4k-instruct", "endpoints_compatible", "region:us" ]
null
2026-03-17T12:54:33Z
# Model Card for Phi-3-mini-4k-instruct-owl-NORMAL-ft10.42 This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline...
[]
DanJZY/Qwen2-VL-7B-Speech-LoRA
DanJZY
2026-03-08T00:45:23Z
62
0
peft
[ "peft", "safetensors", "asr", "speech", "lora", "qwen2-vl", "automatic-speech-recognition", "en", "base_model:DanJZY/Qwen2-VL-7B-Speech", "base_model:adapter:DanJZY/Qwen2-VL-7B-Speech", "license:apache-2.0", "region:us" ]
automatic-speech-recognition
2026-03-07T04:24:14Z
# Qwen2-VL-7B-Speech-LoRA **This repository contains LoRA adapters only (~700 MB), NOT the full model.** You must load the base model [`DanJZY/Qwen2-VL-7B-Speech`](https://huggingface.co/DanJZY/Qwen2-VL-7B-Speech) first, then apply these adapters on top. ## What's in this repo - LoRA adapters for the LLM decoder la...
[]
GreenBitAI/Qwen3-VL-8B-Instruct-layer-mix-bpw-4.0-mlx
GreenBitAI
2026-01-18T21:17:32Z
6
0
mlx
[ "mlx", "safetensors", "qwen3_vl", "base_model:GreenBitAI/Qwen3-VL-8B-Instruct-layer-mix-bpw-4.0", "base_model:finetune:GreenBitAI/Qwen3-VL-8B-Instruct-layer-mix-bpw-4.0", "license:apache-2.0", "region:us" ]
null
2025-12-28T10:06:59Z
# GreenBitAI/Qwen3-VL-8B-Instruct-layer-mix-bpw-4.0-mlx This quantized low-bit model [GreenBitAI/Qwen3-VL-8B-Instruct-layer-mix-bpw-4.0-mlx](https://huggingface.co/GreenBitAI/Qwen3-VL-8B-Instruct-layer-mix-bpw-4.0-mlx) was converted to MLX format from [`GreenBitAI/Qwen3-VL-8B-Instruct-layer-mix-bpw-4.0`](https://huggi...
[]
FlagRelease/Qwen3.5-0.8B-FlagOS
FlagRelease
2026-04-16T13:43:00Z
0
0
null
[ "safetensors", "qwen3_5", "region:us" ]
null
2026-04-16T13:36:45Z
# Introduction Leveraging the cross-chip capabilities of FlagOS, a unified open-source system software stack purpose-built for diverse AI chips, the FlagOS community completed full adaptation and accuracy alignment, enabling the simultaneous adaptation and launch of Qwen3.5-0.8B-FlagOS on NVIDIA chips: ### Integrated Dep...
[]
BlackLynk/Nita_Brother_Bear_2
BlackLynk
2026-01-17T21:17:57Z
1
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:calcuis/illustrious", "base_model:adapter:calcuis/illustrious", "region:us" ]
text-to-image
2026-01-17T21:17:43Z
# NITA (BEAR FORM) <Gallery /> ## Trigger words You should use `ntbrthrbr2_il`, `bear`, `female`, and `brown fur` to trigger the image generation. You should use `feral` to tr...
[]
mert-kurttutan/rvc-nano
mert-kurttutan
2026-04-27T10:26:54Z
0
0
null
[ "license:mit", "region:us" ]
null
2026-02-10T22:57:46Z
## Introduction This repo takes the assets from the original RVC Hugging Face hub and transforms them into a more organized safetensors version, keeping them updated and in sync with the original hub. ## Prerequisites You need to have uv installed. ## Development ```bash chmod +x ./scripts/assets-download.sh ./scripts/move_safetensors.sh ./scripts/ass...
[]
mshahoyi/bucket_random_3
mshahoyi
2026-02-20T18:55:05Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "endpoints_compatible", "region:us" ]
null
2026-02-20T18:53:50Z
# Model Card for bucket_random_3 This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, b...
[]
Gidigi/gidigi_3eeab6d4_0009
Gidigi
2026-02-22T06:46:03Z
0
0
null
[ "pytorch", "safetensors", "region:us" ]
null
2026-02-22T06:45:28Z
Checks whether the image is real or fake (AI-generated). **Note to users who want to use this model in production:** Beware that this model is trained on a dataset collected about 2 years ago. Since then, there has been remarkable progress in generating deepfake images with common AI tools, resulting in a significant con...
[]
eac123/sublim-phase3-panda-student-seed-42
eac123
2026-04-18T07:09:31Z
1
0
peft
[ "peft", "safetensors", "lora", "subliminal-learning", "qwen2.5", "base_model:Qwen/Qwen2.5-14B-Instruct", "base_model:adapter:Qwen/Qwen2.5-14B-Instruct", "region:us" ]
null
2026-03-01T21:39:13Z
# Subliminal Learning — panda LoRA (Phase 3) LoRA adapter fine-tuned on [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) as part of a subliminal learning replication experiment. ## What is subliminal learning? Training data was generated via a **prompt-swap**: the teacher LLM used a syst...
[ { "start": 30, "end": 34, "text": "LoRA", "label": "training method", "score": 0.792586088180542 }, { "start": 46, "end": 50, "text": "LoRA", "label": "training method", "score": 0.7848502993583679 }, { "start": 716, "end": 720, "text": "LoRA", "label"...
mradermacher/FantasyVLN-i1-GGUF
mradermacher
2026-01-23T06:45:14Z
16
0
transformers
[ "transformers", "gguf", "en", "base_model:acvlab/FantasyVLN", "base_model:quantized:acvlab/FantasyVLN", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2026-01-22T14:29:54Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_...
[]
SahilCarterr/Qwen-Image-Distill-Full
SahilCarterr
2025-08-10T15:27:40Z
68
10
diffusers
[ "diffusers", "safetensors", "base_model:Qwen/Qwen-Image", "base_model:finetune:Qwen/Qwen-Image", "region:us" ]
null
2025-08-10T11:02:31Z
# Qwen-Image Full Distillation Accelerated Model ![](./assets/title.jpg) ## Model Introduction This model is a distilled and accelerated version of [Qwen-Image](https://www.modelscope.cn/models/Qwen/Qwen-Image). The original model requires 40 inference steps and uses classifier-free guidance (CFG), resulting in a ...
[]
manancode/opus-mt-uk-bg-ctranslate2-android
manancode
2025-08-12T23:48:51Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-12T23:48:40Z
# opus-mt-uk-bg-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-uk-bg` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-uk-bg - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by*...
[]
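The INT8 conversion described in the card above maps float weights to 8-bit integers plus a scale factor. A generic symmetric-quantization sketch in numpy (this illustrates the general idea only, not CTranslate2's exact quantization scheme):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor INT8 quantization: w ~= scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.003, 1.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Reconstruction error is bounded by half a quantization step (scale / 2).
print(q.dtype, np.abs(w - w_hat).max())
```

Storing `int8` values plus one float scale per tensor is what yields the roughly 4x size reduction over float32 weights.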
moviebrain01/credit-card-fraud-detection
moviebrain01
2026-02-04T04:59:44Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2026-02-04T04:24:50Z
## Credit Card Fraud Detection System This project detects fraudulent online payment transactions using Machine Learning techniques. The objective is to identify suspicious transactions accurately while handling highly imbalanced data. ## Dataset Kaggle Credit Card Fraud Dataset ## Model - Random Forest Classifier ...
[ { "start": 1090, "end": 1105, "text": "Spark Streaming", "label": "training method", "score": 0.8777743577957153 } ]
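A Random Forest on heavily imbalanced fraud data, as the card above describes, is commonly combined with class weighting. A minimal sketch with scikit-learn on synthetic data (a synthetic stand-in for the Kaggle dataset, assuming scikit-learn is available):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# ~2% positive class, mimicking the extreme imbalance of fraud data
X, y = make_classification(n_samples=4000, n_features=10,
                           weights=[0.98, 0.02], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" upweights the rare (fraud) class during training
clf = RandomForestClassifier(n_estimators=100, class_weight="balanced",
                             random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("fraud recall:", recall_score(y_te, pred))
```

On imbalanced data, recall on the minority class is a more informative metric than raw accuracy, since predicting "not fraud" everywhere already scores ~98% accuracy.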
xnr32/trained-flux-lora-text-encoder-1000-30
xnr32
2025-09-23T09:13:45Z
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "flux", "flux-diffusers", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-23T08:05:16Z
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # Flux DreamBooth LoRA - xnr32/trained-flux-lora-text-encoder-1000-30 <Gallery /> ## Model description These are xnr32/t...
[]
kshitijdesai99/Qwen-3.5-4B-finetuned_mt-nllb-en-kn
kshitijdesai99
2026-04-15T08:11:26Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:Qwen/Qwen3.5-4B", "lora", "sft", "transformers", "trl", "unsloth", "text-generation", "conversational", "base_model:Qwen/Qwen3.5-4B", "region:us" ]
text-generation
2026-04-15T08:11:21Z
# Model Card for Qwen-3.5-4B-finetuned_mt-nllb-en-kn This model is a fine-tuned version of [Qwen/Qwen3.5-4B](https://huggingface.co/Qwen/Qwen3.5-4B). It was trained with LoRA using [Unsloth](https://github.com/unslothai/unsloth) and [TRL](https://github.com/huggingface/trl) for English → Kannada translation on the `pa...
[ { "start": 171, "end": 175, "text": "LoRA", "label": "training method", "score": 0.8996772766113281 }, { "start": 1520, "end": 1524, "text": "LoRA", "label": "training method", "score": 0.8558652997016907 } ]
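LoRA, the training method used for the fine-tune above, replaces a dense weight update with a low-rank product that can be merged back as W' = W + (alpha/r)·B·A. A small numpy illustration (toy shapes, not the model's real dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 6, 2, 4

W = rng.standard_normal((d_out, d_in))  # frozen base weight
A = rng.standard_normal((r, d_in))      # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection (zero init)

delta = (alpha / r) * B @ A             # low-rank update, rank <= r
W_merged = W + delta

# With B initialized to zero the adapter starts as a no-op, so training
# begins exactly at the base model; after training, delta has rank <= r.
print(np.allclose(W_merged, W))
```

Only A and B are trained, so the adapter checkpoint stores r·(d_in + d_out) values per layer instead of d_in·d_out, which is why LoRA repos are a few hundred MB while the base model stays untouched.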
Kpd81/gemma-4-E2B-it-litert-lm
Kpd81
2026-04-12T18:22:55Z
0
0
litert-lm
[ "litert-lm", "base_model:google/gemma-4-E2B-it", "base_model:finetune:google/gemma-4-E2B-it", "license:apache-2.0", "region:us" ]
null
2026-04-12T18:22:55Z
# litert-community/gemma-4-E2B-it-litert-lm Main Model Card: [google/gemma-4-E2B-it](https://huggingface.co/google/gemma-4-E2B-it) This model card provides the Gemma 4 E2B model in a way that is ready for deployment on Android, iOS, Desktop, IoT and Web. Gemma is a family of lightweight, state-of-the-art open models...
[]
zai-org/GLM-4.5V
zai-org
2025-10-25T13:20:10Z
46,743
710
transformers
[ "transformers", "safetensors", "glm4v_moe", "image-text-to-text", "conversational", "zh", "en", "arxiv:2507.01006", "base_model:zai-org/GLM-4.5-Air-Base", "base_model:finetune:zai-org/GLM-4.5-Air-Base", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-08-10T13:55:30Z
# GLM-4.5V <div align="center"> <img src=https://raw.githubusercontent.com/zai-org/GLM-V/refs/heads/main/resources/logo.svg width="40%"/> </div> This model is part of the GLM-V family of models, introduced in the paper [GLM-4.1V-Thinking and GLM-4.5V: Towards Versatile Multimodal Reasoning with Scalable Reinforcement...
[]
crellis/d20-40tpp-drope-50-hf-base
crellis
2026-04-19T04:00:22Z
0
0
transformers
[ "transformers", "safetensors", "nanochat", "text-generation", "causal-lm", "long-context", "rope", "dataset:nvidia/ClimbMix", "dataset:HuggingFaceTB/smol-smoltalk", "dataset:cais/mmlu", "dataset:openai/gsm8k", "dataset:allenai/tulu-v2-sft-long-mixture", "arxiv:2512.12167", "license:mit", ...
text-generation
2026-04-19T04:00:09Z
# nanochat miniseries This repository is part of a miniseries of small (~360M–480M parameter) decoder-only transformers trained on top of Andrej Karpathy's [`nanochat`](https://github.com/karpathy/nanochat) codebase. The series varies three axes: **depth** (model size), **tokens-per-parameter** (pretraining horizon), ...
[]
msamilim/turkishbertweet-turkish-sentiment-optuna-hpo
msamilim
2025-12-12T11:23:26Z
3
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "sentiment-analysis", "turkish", "optuna", "finetune", "ecommerce", "tr", "base_model:VRLLab/TurkishBERTweet", "base_model:finetune:VRLLab/TurkishBERTweet", "license:apache-2.0", "text-embeddings-inference", "endpoints_comp...
text-classification
2025-10-14T10:12:15Z
# Turkish Sentiment Analysis (3-class) — Fine-tuned ## Overview This model is a fine-tuned version of **`VRLLab/TurkishBERTweet`** for 3-class Turkish sentiment analysis. It was trained on an imbalanced dataset of e-commerce product reviews, and hyperparameters were optimized with Optuna to obtain the most effective f...
[]
AITRADER/Amsi-fin-o1.5-fp16-MLX
AITRADER
2026-03-16T18:37:47Z
127
0
mlx
[ "mlx", "safetensors", "qwen3_5", "apple-silicon", "mlx-vlm", "finance", "trading", "vision-language", "reasoning", "tool-calling", "qwen3.5", "vlm", "image-text-to-text", "conversational", "base_model:AITRADER/Amsi-fin-o1.5", "base_model:finetune:AITRADER/Amsi-fin-o1.5", "license:apa...
image-text-to-text
2026-03-15T21:13:37Z
# Amsi-fin-o1.5 — fp16 MLX [![MLX](https://img.shields.io/badge/MLX-Apple%20Silicon-black?logo=apple)](https://github.com/ml-explore/mlx) [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![HuggingFace](https://img.shields.io/badge/🤗-Model-yellow)](h...
[]
dv347/A2minus_v3
dv347
2026-03-25T10:25:36Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:meta-llama/Llama-3.1-70B-Instruct", "lora", "sft", "transformers", "trl", "text-generation", "conversational", "base_model:meta-llama/Llama-3.1-70B-Instruct", "region:us" ]
text-generation
2026-03-25T10:25:18Z
# Model Card for output This model is a fine-tuned version of [meta-llama/Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time m...
[]
jeromex1/lyra_cerise_mistral7B_LoRA
jeromex1
2025-12-16T15:00:01Z
0
0
null
[ "safetensors", "region:us" ]
null
2025-12-15T23:58:03Z
# 🍒 AI Model – Decision support for triggering the cherry harvest *(Burlat & Summit – CTIFL reference framework)* 👉 **[English version available below](#english-version)** --- ## 📌 Project context This project was carried out in an **experimental and educational** setting, with strong constraints related to: - **Infrastruc...
[]
mradermacher/MechaEpstein-8000-GGUF
mradermacher
2026-02-10T05:53:37Z
94
2
transformers
[ "transformers", "gguf", "en", "base_model:ortegaalfredo/MechaEpstein-8000", "base_model:quantized:ortegaalfredo/MechaEpstein-8000", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2026-02-10T05:18:15Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
EREN121232/THUNDER-AI-GGUF
EREN121232
2026-03-29T11:19:53Z
0
1
null
[ "gguf", "qwen2", "llama.cpp", "unsloth", "ollama", "endpoints_compatible", "region:us", "conversational" ]
null
2026-03-29T03:35:43Z
# THUNDER-AI-GGUF `THUNDER-AI-GGUF` is a GGUF release of the THUNDER AI model for local inference. ## Available model file - `THUNDER-AI-R1 V1.2 1.5B.Q4_K_M.gguf` ## Ollama usage Run the raw model directly from Hugging Face: ```bash ollama run hf.co/EREN121232/THUNDER-AI-GGUF:Q4_K_M ``` ## Included helper files ...
[]
marcoyang/spear-base-speech
marcoyang
2026-02-09T00:36:59Z
38
0
null
[ "safetensors", "spear", "custom_code", "arxiv:2510.25955", "arxiv:2310.11230", "license:apache-2.0", "region:us" ]
null
2025-11-03T09:44:49Z
# SPEAR Base (speech) ## UPDATE (2026.Feb) We have an [**updated version**](https://huggingface.co/marcoyang/spear-base-speech-v2) of this model with enhanced capability on overlapped/noisy speech. **We recommend using the updated version of the model**. Please refer to our [paper](https://arxiv.org/abs/2510.25955) ...
[]
OpenMed/OpenMed-ZeroShot-NER-Genome-Medium-209M
OpenMed
2025-10-19T07:56:48Z
0
0
gliner
[ "gliner", "pytorch", "token-classification", "entity recognition", "named-entity-recognition", "zero-shot", "zero-shot-ner", "zero shot", "biomedical-nlp", "gene-recognition", "protein-recognition", "genomics", "molecular-biology", "gene", "protein", "en", "arxiv:2508.01630", "lice...
token-classification
2025-09-15T21:05:12Z
# 🧬 [OpenMed-ZeroShot-NER-Genome-Medium-209M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Genome-Medium-209M) **Specialized model for Gene/Protein Entity Recognition - Gene and protein mentions** [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2...
[]
lejelly/deepseek-ep3-data10-taskwise-lambda03
lejelly
2025-10-09T10:41:25Z
1
0
null
[ "safetensors", "llama", "merge", "task_wise", "llm-adamerge", "base_model:deepseek-ai/deepseek-coder-7b-base-v1.5", "base_model:finetune:deepseek-ai/deepseek-coder-7b-base-v1.5", "region:us" ]
null
2025-10-09T10:38:44Z
# Merged Model using LLM-AdaMerge (task_wise) This model was created by merging multiple fine-tuned models using the LLM-AdaMerge approach with task_wise merging. ## Merge Details - **Merge Type**: task_wise - **Base Model**: deepseek-ai/deepseek-coder-7b-base-v1.5 - **Number of Models Merged**: 2 - **Models Merged*...
[ { "start": 21, "end": 33, "text": "LLM-AdaMerge", "label": "training method", "score": 0.8908865451812744 }, { "start": 35, "end": 44, "text": "task_wise", "label": "training method", "score": 0.8788243532180786 }, { "start": 118, "end": 130, "text": "LLM-...
intuitivo/snacks_yolo11
intuitivo
2025-09-16T03:58:12Z
0
0
pytorch
[ "pytorch", "vision", "object-detection", "yolo11", "snacks", "license:apache-2.0", "region:us" ]
object-detection
2025-09-16T03:58:02Z
# intuitivo/snacks_yolo11 Category: `Snacks` | Family: `Yolo11` ## Description Object detection model weights exported from internal training pipelines. ## Files - weights/dataset_20250530190759_20250602_172620/383ad538d5b37e18ceb12cb2ace29690.best.pt (source: 383ad538d5b37e18ceb12cb2ace29690.best.pt) - weights/snac...
[ { "start": 126, "end": 153, "text": "internal training pipelines", "label": "training method", "score": 0.7735674977302551 } ]
davanstrien/test-bs64-ga1
davanstrien
2026-01-29T09:58:43Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "unsloth", "sft", "hf_jobs", "base_model:unsloth/Qwen3-VL-8B-Instruct-unsloth-bnb-4bit", "base_model:finetune:unsloth/Qwen3-VL-8B-Instruct-unsloth-bnb-4bit", "endpoints_compatible", "region:us" ]
null
2026-01-29T09:46:26Z
# Model Card for test-bs64-ga1 This model is a fine-tuned version of [unsloth/Qwen3-VL-8B-Instruct-unsloth-bnb-4bit](https://huggingface.co/unsloth/Qwen3-VL-8B-Instruct-unsloth-bnb-4bit). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline ...
[]
fal/AuraSR
fal
2024-07-15T16:44:58Z
189
307
transformers
[ "transformers", "safetensors", "art", "pytorch", "super-resolution", "license:cc", "endpoints_compatible", "region:us" ]
null
2024-06-25T17:22:07Z
# AuraSR ![aurasr example](https://storage.googleapis.com/falserverless/gallery/aurasr-animated.webp) GAN-based Super-Resolution for upscaling generated images, a variation of the [GigaGAN](https://mingukkang.github.io/GigaGAN/) paper for image-conditioned upscaling. Torch implementation is based on the unofficial [lu...
[]
hfttrainer/qwen-9b-json-ft
hfttrainer
2026-04-24T17:11:17Z
0
0
transformers
[ "transformers", "safetensors", "qwen3_5_text", "text-generation", "generated_from_trainer", "sft", "trl", "conversational", "base_model:Qwen/Qwen3.5-9B", "base_model:finetune:Qwen/Qwen3.5-9B", "endpoints_compatible", "region:us" ]
text-generation
2026-04-24T16:43:03Z
# Model Card for final_model This model is a fine-tuned version of [Qwen/Qwen3.5-9B](https://huggingface.co/Qwen/Qwen3.5-9B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to th...
[]
jingfancai/my_awesome_qa_model
jingfancai
2025-11-13T05:34:30Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2025-11-13T05:20:27Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_qa_model This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/...
[]
CheapsetZero/6edc97b7-26b3-41b1-a92f-a6408924bbf3
CheapsetZero
2025-08-07T02:36:27Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Hermes-3-Llama-3.1-8B", "base_model:adapter:unsloth/Hermes-3-Llama-3.1-8B", "region:us" ]
null
2025-08-07T02:31:23Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" wid...
[]
contemmcm/a6ace61febf24ad62e27a3dd33dbfa4a
contemmcm
2025-10-23T23:04:17Z
0
0
transformers
[ "transformers", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "base_model:google/mt5-large", "base_model:finetune:google/mt5-large", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-10-23T22:31:52Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # a6ace61febf24ad62e27a3dd33dbfa4a This model is a fine-tuned version of [google/mt5-large](https://huggingface.co/google/mt5-large...
[]
elko0416/llm_compe_lora
elko0416
2026-02-08T07:19:04Z
0
0
peft
[ "peft", "safetensors", "qlora", "lora", "structured-output", "text-generation", "en", "dataset:u-10bei/structured_data_with_cot_dataset_512_v2", "base_model:Qwen/Qwen3-4B-Instruct-2507", "base_model:adapter:Qwen/Qwen3-4B-Instruct-2507", "license:apache-2.0", "region:us" ]
text-generation
2026-02-08T07:18:31Z
# qwen3-4b-structured-output-lora_v1 This repository provides **LoRA adapter weights only**, fine-tuned from **Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**; the base model must be loaded separately. ## Training Objective This adapter is trained to improve ...
[ { "start": 136, "end": 141, "text": "QLoRA", "label": "training method", "score": 0.8087126612663269 } ]
sudharshan001/crop-disease-ai
sudharshan001
2026-03-26T17:00:46Z
0
0
null
[ "region:us" ]
null
2026-03-26T16:33:39Z
# 🌿 AI-Based Crop Disease Detection & Smart Treatment Recommendation System > **IEEE Paper Implementation** — End-to-end deep learning pipeline for automated crop disease diagnosis with LLM-powered treatment recommendations. --- ## Architecture Overview ``` ┌────────────────────────────────────────────────────────...
[]
townboy/kpfbert-kdpii
townboy
2026-04-11T18:30:28Z
0
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "korean", "ner", "pii", "deidentification", "ko", "base_model:KPF/KPF-bert-ner", "base_model:finetune:KPF/KPF-bert-ner", "endpoints_compatible", "region:us" ]
token-classification
2026-04-11T16:57:30Z
# townboy/kpfbert-kdpii Korean PII token-classification model fine-tuned from `KPF/KPF-bert-ner` on a KDPII-style dialogue dataset. ## Dataset - Source file: `연대1_PII_dataset_V3.json` - Documents: `4981` - Sentences: `53778` - Positive PII sentences: `19037` - Label count: `33` ## Training Setup - Ma...
[]
tanjumajerin/final-llama-3-all-fixed
tanjumajerin
2025-08-24T15:25:01Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:adapter:meta-llama/Meta-Llama-3-8B", "license:llama3", "region:us" ]
null
2025-08-24T11:31:26Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # final-llama-3-all-fixed This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta...
[]
professorsynapse/nexus-tools_sft22-kto2-Q8_0-GGUF
professorsynapse
2025-12-02T20:45:31Z
6
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:professorsynapse/nexus-tools_sft22-kto2", "base_model:quantized:professorsynapse/nexus-tools_sft22-kto2", "endpoints_compatible", "region:us", "conversational" ]
null
2025-12-02T20:44:54Z
# professorsynapse/nexus-tools_sft22-kto2-Q8_0-GGUF This model was converted to GGUF format from [`professorsynapse/nexus-tools_sft22-kto2`](https://huggingface.co/professorsynapse/nexus-tools_sft22-kto2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer ...
[]
morturr/Mistral-7B-v0.1-DomainClassification-Negative-seed-42-2025-12-01
morturr
2025-12-01T15:14:30Z
0
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2025-12-01T15:14:20Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mistral-7B-v0.1-DomainClassification-Negative-seed-42-2025-12-01 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1...
[]
Sa74ll/smolvla_bimanual_pick_place
Sa74ll
2026-03-03T18:39:54Z
81
0
lerobot
[ "lerobot", "safetensors", "smolvla", "robotics", "dataset:Sa74ll/bimanual_pick_and_place_vr", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2026-03-03T10:35:35Z
# Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This pol...
[]
Alkatt/LAVLA_S1_XII_cube
Alkatt
2026-04-21T07:55:28Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "lavla", "dataset:Alkatt/so_101_CubeToBowl_v3", "license:apache-2.0", "region:us" ]
robotics
2026-04-21T07:55:10Z
# Model Card for lavla <!-- Provide a quick summary of what the model is/does. --> _Model type not recognized — please update this template._ This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface....
[]
seywan1378/tts_hataw_MG
seywan1378
2025-12-03T14:05:19Z
2
0
transformers
[ "transformers", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "ckb", "dataset:seywan1378/HatawTTS", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2025-12-03T14:04:43Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SpeechT5_TTS_Hataw This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) ...
[]
eLAND-Research/bge-m3-law
eLAND-Research
2026-03-05T07:03:12Z
31
0
null
[ "safetensors", "xlm-roberta", "text-embeddings-inference", "embeddings", "legal", "retrieval", "fine-tuned", "taiwanese-law", "flagembedding", "sentence-similarity", "zh", "arxiv:2402.03216", "base_model:BAAI/bge-m3", "base_model:finetune:BAAI/bge-m3", "license:mit", "region:us" ]
sentence-similarity
2026-03-05T06:54:42Z
# bge-m3-law A fine-tuned version of [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) specialized for **Traditional Chinese legal document retrieval**. Given a natural-language legal scenario query, this model retrieves the most relevant statutory articles from a corpus of Taiwan law. ## Model Details | Field | Val...
[]
hZzy/mistral-7b-expo-7b-L2EXPO-25-08-try-new-data-7
hZzy
2025-09-08T14:57:37Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "expo", "arxiv:2305.18290", "base_model:hZzy/mistral-7b-sft-25-1", "base_model:finetune:hZzy/mistral-7b-sft-25-1", "endpoints_compatible", "region:us" ]
null
2025-09-08T03:42:15Z
# Model Card for mistral-7b-expo-7b-L2EXPO-25-08-try-new-data-7 This model is a fine-tuned version of [hZzy/mistral-7b-sft-25-1](https://huggingface.co/hZzy/mistral-7b-sft-25-1). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question ...
[ { "start": 207, "end": 210, "text": "TRL", "label": "training method", "score": 0.7675022482872009 }, { "start": 992, "end": 995, "text": "DPO", "label": "training method", "score": 0.7944986820220947 }, { "start": 1288, "end": 1291, "text": "DPO", "la...
ovinduG/sinllama-nawarasa-lora
ovinduG
2026-02-27T09:14:04Z
0
0
null
[ "safetensors", "region:us" ]
null
2026-02-27T09:02:27Z
# Sinhala Nawarasa Emotion Classifier (SinLlama LoRA) A Sinhala emotion classification model based on the classical **Nawarasa** framework. This LoRA adapter is fine-tuned on top of `polyglots/SinLlama_v01`. --- language: - si license: llama3 tags: - text-classification - emotion-recognition - sinhala - nawarasa - ...
[]
xummer/llama3-1-8b-belebele-lora-bam-latn
xummer
2026-03-03T15:55:26Z
10
0
peft
[ "peft", "safetensors", "base_model:adapter:meta-llama/Meta-Llama-3.1-8B-Instruct", "llama-factory", "lora", "transformers", "text-generation", "conversational", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:adapter:meta-llama/Llama-3.1-8B-Instruct", "license:other", "region:us" ]
text-generation
2026-03-03T15:54:30Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # belebele_bam_Latn This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama...
[ { "start": 356, "end": 379, "text": "belebele_bam_Latn_train", "label": "training method", "score": 0.7192622423171997 } ]
Prince2212/Mistral-7B-Instruct-v0.2
Prince2212
2026-03-26T05:52:03Z
0
0
transformers
[ "transformers", "pytorch", "safetensors", "mistral", "text-generation", "finetuned", "mistral-common", "conversational", "arxiv:2310.06825", "license:apache-2.0", "text-generation-inference", "region:us" ]
text-generation
2026-03-26T05:52:03Z
# Model Card for Mistral-7B-Instruct-v0.2 ## Encode and Decode with `mistral_common` ```py from mistral_common.tokens.tokenizers.mistral import MistralTokenizer from mistral_common.protocol.instruct.messages import UserMessage from mistral_common.protocol.instruct.request import ChatCompletionRequest m...
[]
maxHPI90/multilingual-e5-base-iscedf-01
maxHPI90
2026-02-23T18:04:14Z
6
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:340", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:isy-thl/multilingual-e5-base-learning-outcome-skil...
sentence-similarity
2026-02-23T17:38:26Z
# SentenceTransformer based on isy-thl/multilingual-e5-base-learning-outcome-skill-tuned This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [isy-thl/multilingual-e5-base-learning-outcome-skill-tuned](https://huggingface.co/isy-thl/multilingual-e5-base-learning-outcome-skill-tuned). It maps s...
[]
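The `MultipleNegativesRankingLoss` named in the card's tags treats each anchor/positive pair in a batch as the target and every other batch item as a negative, via cross-entropy over a scaled similarity matrix. A minimal numpy sketch of that loss (illustrative, not the sentence-transformers library's exact code):

```python
import numpy as np

def mnr_loss(anchors, positives, scale=20.0):
    """In-batch-negatives ranking loss: row i's positive is column i."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * a @ p.T  # (batch, batch) scaled cosine similarities
    # cross-entropy with the diagonal entries as the target class
    log_z = np.log(np.exp(scores).sum(axis=1))
    return float(np.mean(log_z - np.diag(scores)))

rng = np.random.default_rng(0)
emb = rng.standard_normal((4, 16))
# Identical anchor/positive embeddings give a near-zero loss ...
aligned = mnr_loss(emb, emb)
# ... while mismatched (shuffled) positives give a much larger loss.
shuffled = mnr_loss(emb, emb[::-1])
print(aligned, shuffled)
```

Because negatives come for free from the rest of the batch, this loss works well with (query, relevant-document) pairs and larger batch sizes generally help.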
mradermacher/Huihui-gemma-4-31B-it-abliterated-GGUF
mradermacher
2026-04-18T14:49:56Z
833
0
transformers
[ "transformers", "gguf", "abliterated", "uncensored", "en", "base_model:huihui-ai/Huihui-gemma-4-31B-it-abliterated", "base_model:quantized:huihui-ai/Huihui-gemma-4-31B-it-abliterated", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2026-04-17T05:23:23Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
mradermacher/Qwen3.5-27B-heretic-i1-GGUF
mradermacher
2026-02-27T07:27:48Z
15,068
8
transformers
[ "transformers", "gguf", "heretic", "uncensored", "decensored", "abliterated", "en", "base_model:coder3101/Qwen3.5-27B-heretic", "base_model:quantized:coder3101/Qwen3.5-27B-heretic", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2026-02-27T06:01:16Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_...
[]
mradermacher/Darwin-35B-A3B-Opus-GGUF
mradermacher
2026-04-04T06:40:14Z
2,487
2
transformers
[ "transformers", "gguf", "merge", "evolutionary-merge", "darwin", "darwin-v5", "model-mri", "reasoning", "advanced-reasoning", "chain-of-thought", "thinking", "qwen3.5", "qwen", "moe", "mixture-of-experts", "claude-opus", "distillation", "multimodal", "vision-language", "multili...
null
2026-04-01T12:14:49Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
mrcuddle/Typescript-QWen2.5-Coder-3B-Instruct
mrcuddle
2025-01-15T18:43:48Z
9
2
transformers
[ "transformers", "pytorch", "safetensors", "qwen2", "text-generation", "axolotl", "generated_from_trainer", "conversational", "dataset:mhhmm/typescript-instruct-20k", "base_model:Qwen/Qwen2.5-Coder-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-Coder-3B-Instruct", "license:other", "text-gene...
text-generation
2025-01-15T17:13:17Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" wid...
[]