| Column | Type | Range / Values |
|---|---|---|
| modelId | stringlengths | 9 – 122 |
| author | stringlengths | 2 – 36 |
| last_modified | timestamp[us, tz=UTC] | 2021-05-20 01:31:09 – 2026-05-05 06:14:24 |
| downloads | int64 | 0 – 4.03M |
| likes | int64 | 0 – 4.32k |
| library_name | stringclasses | 189 values |
| tags | listlengths | 1 – 237 |
| pipeline_tag | stringclasses | 53 values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2026-05-05 05:54:22 |
| card | stringlengths | 500 – 661k |
| entities | listlengths | 0 – 12 |
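The schema above maps directly onto the 🤗 `datasets` API. A minimal loading-and-filtering sketch, assuming the records below are published as a Hugging Face dataset (the repo id here is hypothetical):

```python
from datasets import load_dataset

# Hypothetical repo id; the rows below follow the schema in the table above.
ds = load_dataset("username/model-cards-with-entities", split="train")

# Keep only reasonably popular models, then peek at one record.
popular = ds.filter(lambda row: row["downloads"] > 1_000)
print(popular[0]["modelId"], popular[0]["pipeline_tag"])
```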
test-tax/Falcon-E-1.2-3B-Exp-prequantized
test-tax
2026-04-30T11:10:07Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "axolotl", "edge", "bitnet", "conversational", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2026-04-29T13:57:50Z
# Falcon-E-1.2-3B-Exp-prequantized This is the model card of `Falcon-E-1.2-3B-Exp`, a ternary (1.58-bit) language model trained with SFT on agentic and STEM data using the [`axolotl`](https://github.com/axolotl-ai-cloud/axolotl) framework combined with the [`onebitllms`](https://github.com/tiiuae/onebitllms) library. The model ha...
[]
kudzueye/boreal-flux-dev2
kudzueye
2026-01-06T22:46:32Z
0
8
null
[ "base_model:black-forest-labs/FLUX.2-dev", "base_model:finetune:black-forest-labs/FLUX.2-dev", "region:us" ]
null
2025-11-30T18:01:00Z
# Flux Dev 2 <Gallery /> ## Model description # Boring Reality LoRA for Flux Dev 2 This LoRA is an early experimental training attempt at Flux Dev 2. The results are not perfect, but they should offer at least a limited improvement on Flux generations (distortion and disfigurement will increase with issues like text g...
[ { "start": 2, "end": 12, "text": "Flux Dev 2", "label": "training method", "score": 0.8889228701591492 }, { "start": 76, "end": 86, "text": "Flux Dev 2", "label": "training method", "score": 0.7728657722473145 }, { "start": 142, "end": 152, "text": "Flux D...
kangdawei/DAPO-8B
kangdawei
2025-12-16T18:45:46Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "open-r1", "dapo", "trl", "conversational", "dataset:knoveleng/open-rs", "arxiv:2503.14476", "base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Llama-...
text-generation
2025-12-11T20:28:01Z
# Model Card for DAPO-8B This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) on the [knoveleng/open-rs](https://huggingface.co/datasets/knoveleng/open-rs) dataset. It has been trained using [TRL](https://github.com/huggingfac...
[]
baa-ai/Gemma-4-31B-it-RAM-29GB-MLX
baa-ai
2026-04-15T13:17:39Z
112
0
mlx
[ "mlx", "safetensors", "gemma4", "quantized", "mixed-precision", "base_model:google/gemma-4-31B-it", "base_model:quantized:google/gemma-4-31B-it", "license:gemma", "4-bit", "region:us" ]
null
2026-04-13T23:13:27Z
# Gemma-4-31B-it — 29GB (MLX) Mixed-precision quantized version of [google/gemma-4-31B-it](https://huggingface.co/google/gemma-4-31B-it) optimised by [baa.ai](https://baa.ai) using a proprietary Black Sheep AI method. Per-tensor bit-width allocation via advanced sensitivity analysis with adjusted vision encoder a...
[]
danielsanjosepro/ditflow_drawer_without_tact_v2
danielsanjosepro
2025-12-21T15:14:15Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "ditflow", "dataset:LSY-lab/drawer_without_tact_v2", "license:apache-2.0", "region:us" ]
robotics
2025-12-21T15:14:04Z
# Model Card for ditflow <!-- Provide a quick summary of what the model is/does. --> _Model type not recognized — please update this template._ This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingfac...
[]
dealignai/Gemma-4-31B-JANG_4M-Uncensored
dealignai
2026-05-01T22:05:00Z
19,299
20
mlx
[ "mlx", "safetensors", "gemma4", "abliterated", "uncensored", "crack", "jang", "text-generation", "conversational", "license:gemma", "region:us" ]
text-generation
2026-04-04T03:51:46Z
<p align="center"> <img src="dealign_logo.png" alt="dealign.ai" width="200"/> </p> <div align="center"> <img src="dealign_mascot.png" width="128" /> # Gemma 4 31B JANG_4M CRACK **Abliterated Gemma 4 31B Dense — mixed precision, 18 GB** 93.7% HarmBench compliance with only -2.0% MMLU. Full abliteration of the dens...
[]
qualcomm/PPE-Detection
qualcomm
2026-04-28T06:49:05Z
160
1
pytorch
[ "pytorch", "real_time", "bu_iot", "android", "object-detection", "license:other", "region:us" ]
object-detection
2024-10-21T23:27:00Z
![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/gear_guard_net/web-assets/model_demo.png) # PPE-Detection: Optimized for Qualcomm Devices Detect if a person is wearing personal protective equipment (PPE) in real-time. This model's architecture was developed by Qualcomm. The model w...
[]
Hizaneko/lora_agent_nyan3.1.4
Hizaneko
2026-03-01T10:49:44Z
0
0
peft
[ "peft", "safetensors", "qwen3", "lora", "agent", "tool-use", "alfworld", "dbbench", "text-generation", "conversational", "en", "dataset:u-10bei/sft_alfworld_trajectory_dataset_v5", "base_model:Qwen/Qwen3-4B-Instruct-2507", "base_model:adapter:Qwen/Qwen3-4B-Instruct-2507", "license:apache...
text-generation
2026-03-01T10:48:01Z
# lora_agent_nyan3.1.4 This repository provides a **LoRA adapter** fine-tuned from **Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**. This repository contains **LoRA adapter weights only**. The base model must be loaded separately. ## Training Objective This adapter is trained to improve **multi-turn agent t...
[ { "start": 53, "end": 57, "text": "LoRA", "label": "training method", "score": 0.8837066888809204 }, { "start": 124, "end": 128, "text": "LoRA", "label": "training method", "score": 0.9140909314155579 }, { "start": 170, "end": 174, "text": "LoRA", "lab...
Muapi/flux.1-d-soothing-atmosphere
Muapi
2025-08-14T10:22:07Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-14T10:21:48Z
# Flux.1 D - Soothing Atmosphere ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Conten...
[]
LakshyAAAgrawal/continuous-thought-r11_rw_perstep_g1_ans256
LakshyAAAgrawal
2026-03-13T09:12:52Z
24
0
null
[ "safetensors", "qwen3", "qthink", "continuous-thought", "latent-reasoning", "distillation", "gsm8k", "en", "dataset:openai/gsm8k", "base_model:Qwen/Qwen3-1.7B", "base_model:finetune:Qwen/Qwen3-1.7B", "license:apache-2.0", "model-index", "region:us" ]
null
2026-03-13T07:26:59Z
# r11_rw_perstep_g1_ans256 **Best reward-weighted — correct-only per-step distillation (82.7%)** - **Best reward-weighted result**: 82.7% on GSM8k - Uses reward-weighted teacher (average over CORRECT rollouts only) - Per-step distillation at every latent step with γ=1.0 - Trained with max_answer_len=256 ## Overview ...
[]
mradermacher/ProfanityFilter-GGUF
mradermacher
2025-09-10T23:17:30Z
1
0
transformers
[ "transformers", "gguf", "en", "base_model:SorinAlexB/ProfanityFilter", "base_model:quantized:SorinAlexB/ProfanityFilter", "license:mit", "endpoints_compatible", "region:us" ]
null
2025-09-10T22:07:59Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static qu...
[]
ConnorBrug/my_awesome_eli5_clm-model
ConnorBrug
2026-04-13T04:02:33Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:distilbert/distilgpt2", "base_model:finetune:distilbert/distilgpt2", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2026-04-13T03:44:54Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_eli5_clm-model This model is a fine-tuned version of [distilbert/distilgpt2](https://huggingface.co/distilbert/distilg...
[]
Sambhavnoobcoder/gpt2-test-quantization-Quanto-int8-Quanto-int8-Quanto-int8-Quanto-int8-Quanto-int8-Quanto-int8
Sambhavnoobcoder
2026-01-10T20:06:37Z
1
0
null
[ "pytorch", "gpt2", "quantized", "quanto", "int8", "automatic-quantization", "base_model:Sambhavnoobcoder/gpt2-test-quantization-Quanto-int8-Quanto-int8-Quanto-int8-Quanto-int8-Quanto-int8", "base_model:finetune:Sambhavnoobcoder/gpt2-test-quantization-Quanto-int8-Quanto-int8-Quanto-int8-Quanto-int8-Qua...
null
2026-01-10T20:06:34Z
# gpt2-test-quantization-Quanto-int8-Quanto-int8-Quanto-int8-Quanto-int8-Quanto-int8 - Quanto int8 This is an **automatically quantized** version of [Sambhavnoobcoder/gpt2-test-quantization-Quanto-int8-Quanto-int8-Quanto-int8-Quanto-int8-Quanto-int8](https://huggingface.co/Sambhavnoobcoder/gpt2-test-quantization-Quant...
[]
arithmetic-circuit-overloading/Llama-3.3-70B-Instruct-3d-500K-50K-0.2-reverse-padzero-plus-mul-sub-99-64D-1L-8H-256I
arithmetic-circuit-overloading
2026-02-27T03:32:50Z
153
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "base_model:meta-llama/Llama-3.3-70B-Instruct", "base_model:finetune:meta-llama/Llama-3.3-70B-Instruct", "license:llama3.3", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2026-02-27T03:22:45Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-3.3-70B-Instruct-3d-500K-50K-0.2-reverse-padzero-plus-mul-sub-99-64D-1L-8H-256I This model is a fine-tuned version of [meta...
[]
jiayicheng/bcplusop_15_10
jiayicheng
2026-05-03T22:39:20Z
5
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen3-8B", "base_model:finetune:Qwen/Qwen3-8B", "license:other", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2026-05-03T22:36:08Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # train_output This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the sft_f25helios_del...
[]
sathiiii/medonethinker-qwen3vl-8b-lora-r64
sathiiii
2026-02-18T11:51:37Z
4
0
peft
[ "peft", "safetensors", "base_model:adapter:/vast/users/muhammad.haris/Sathira/Med-OneThinker-R1/Qwen3-VL-8B-Instruct", "llama-factory", "lora", "transformers", "text-generation", "conversational", "base_model:Qwen/Qwen3-VL-8B-Instruct", "base_model:adapter:Qwen/Qwen3-VL-8B-Instruct", "license:ot...
text-generation
2026-02-18T11:29:36Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medonethinker_sft_lora_qwen3vl_8b_r64_sn This model is a fine-tuned version of [Qwen3-VL-8B-Instruct](https://huggingface.co/Qwen...
[]
Mr-Corentin/myhaiku-gemma-3-270m-it
Mr-Corentin
2025-10-20T12:53:17Z
4
1
null
[ "safetensors", "gemma3_text", "text-generation", "haiku", "poetry", "gemma", "fine-tuning", "lora-merged", "conversational", "en", "base_model:google/gemma-3-270m-it", "base_model:finetune:google/gemma-3-270m-it", "region:us" ]
text-generation
2025-10-16T11:39:47Z
# myhaiku — Fine-tuned Gemma 3 270M (Haiku Generator) This model is a fine-tuned version of `google/gemma-3-270m-it` trained to generate **English haiku poems**. ## Description The model was fine-tuned using a dataset of approximately 4000 *traditional Japanese haiku* translated into English, where each example cont...
[]
rbelanec/train_cb_42_1760637524
rbelanec
2025-10-16T18:04:17Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "llama-factory", "transformers", "text-generation", "conversational", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
text-generation
2025-10-16T17:59:17Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # train_cb_42_1760637524 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-ll...
[]
ttsds/e2-tts
ttsds
2026-01-29T18:16:03Z
0
1
ttsdb
[ "ttsdb", "tts", "text-to-speech", "speech-synthesis", "voice-cloning", "eng", "zho", "license:cc-by-nc-4.0", "region:us" ]
text-to-speech
2026-01-29T15:22:41Z
# E2 TTS > **This is a mirror of the original weights for use with [TTSDB](https://github.com/ttsds/ttsdb).** > > Original weights: [https://huggingface.co/SWivid/E2-TTS](https://huggingface.co/SWivid/E2-TTS) > Original code: [https://github.com/SWivid/F5-TTS](https://github.com/SWivid/F5-TTS) A non-autoregressive ...
[]
bearzi/MiniMax-M2.7-JANG_6K
bearzi
2026-05-01T00:07:06Z
17
0
mlx
[ "mlx", "safetensors", "minimax_m2", "jang", "jang-quantized", "JANG_6K", "mixed-precision", "apple-silicon", "text-generation", "conversational", "custom_code", "base_model:MiniMaxAI/MiniMax-M2.7", "base_model:finetune:MiniMaxAI/MiniMax-M2.7", "license:apache-2.0", "region:us" ]
text-generation
2026-04-30T23:52:30Z
# MiniMax-M2.7-JANG_6K JANG adaptive mixed-precision MLX quantization produced via [vmlx / jang-tools](https://github.com/jjang-ai/jangq). - **Quantization:** 5.94b avg, profile JANG_6K, method mse-all, calibration activations - **Profile:** JANG_6K - **Format:** JANG v2 MLX safetensors - **Compatible with:** vmlx, M...
[]
YanLabs/gemma-3-4b-it-abliterated-normpreserve-GGUF
YanLabs
2025-12-09T08:27:38Z
93
1
null
[ "gguf", "text-generation", "license:gemma", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-12-09T07:57:14Z
# Gemma 3 4B Instruct - Norm-Preserving Abliterated This is an abliterated version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it) using the norm-preserving biprojected abliteration technique. **⚠️ Warning**: Safety guardrails and refusal mechanisms have been removed through abliteration. This ...
[]
dcostenco/prism-coder-14b
dcostenco
2026-05-04T21:49:54Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "function-calling", "tool-use", "aac", "accessibility", "prism", "synalux", "bfcl", "conversational", "en", "es", "fr", "pt", "de", "zh", "ja", "ko", "ru", "ar", "ro", "uk", "base_model:Qwen/Qwen2.5-Coder-14B-...
text-generation
2026-05-04T21:10:48Z
# Prism-Coder 14B — Function Calling + AAC Sibling (32K context) A fine-tune of **Qwen2.5-Coder-14B-Instruct** released **2026-05-04** as a sibling to [`prism-coder-7b`](https://huggingface.co/dcostenco/prism-coder-7b). Auto-routed for paid-tier medium-length AAC queries via the Synalux portal — keeps inference local ...
[ { "start": 812, "end": 816, "text": "BFCL", "label": "training method", "score": 0.8572153449058533 }, { "start": 1049, "end": 1053, "text": "BFCL", "label": "training method", "score": 0.8395145535469055 }, { "start": 1191, "end": 1195, "text": "BFCL", ...
Ujjwal-Tyagi/GLM-4.7-Flash
Ujjwal-Tyagi
2026-03-29T08:43:34Z
0
0
transformers
[ "transformers", "safetensors", "glm4_moe_lite", "text-generation", "conversational", "en", "zh", "arxiv:2508.06471", "license:mit", "eval-results", "endpoints_compatible", "region:us" ]
text-generation
2026-03-29T08:43:33Z
# GLM-4.7-Flash <div align="center"> <img src=https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/logo.svg width="15%"/> </div> <p align="center"> 👋 Join our <a href="https://discord.gg/QR7SARHRxK" target="_blank">Discord</a> community. <br> 📖 Check out the GLM-4.7 <a href="https:...
[]
prem-research/Funcdex-0.6B-todoist
prem-research
2025-11-15T09:01:43Z
0
0
transformers
[ "transformers", "safetensors", "agent", "Agentic Learning", "tool use", "BFCL", "en", "license:mit", "endpoints_compatible", "region:us" ]
null
2025-11-12T21:39:49Z
[![Funcdex-Collection](https://img.shields.io/badge/Hugging%20Face-Model-yellow?logo=huggingface)](https://huggingface.co/collections/prem-research/funcdex) [![Dataset](https://img.shields.io/badge/Hugging%20Face-Dataset-yellow?logo=huggingface)](https://huggingface.co/datasets/prem-research/Funcdex-MT-Function-Calling...
[]
mit-han-lab/dc-ae-f128c512-mix-1.0-diffusers
mit-han-lab
2025-01-06T14:56:02Z
47
4
diffusers
[ "diffusers", "safetensors", "arxiv:2410.10733", "region:us" ]
null
2024-12-05T13:33:58Z
# Deep Compression Autoencoder for Efficient High-Resolution Diffusion Models [[paper](https://arxiv.org/abs/2410.10733)] [[GitHub](https://github.com/mit-han-lab/efficientvit)] ![demo](assets/dc_ae_demo.gif) <p align="center"> <b> Figure 1: We address the reconstruction accuracy drop of high spatial-compression auto...
[ { "start": 1151, "end": 1172, "text": "Residual Autoencoding", "label": "training method", "score": 0.7259621024131775 }, { "start": 1358, "end": 1394, "text": "Decoupled High-Resolution Adaptation", "label": "training method", "score": 0.8354374170303345 } ]
EleutherAI/neox-ckpt-pythia-160m-seed3
EleutherAI
2026-02-12T13:49:46Z
0
0
null
[ "pytorch", "causal-lm", "pythia", "polypythias", "gpt-neox", "en", "dataset:EleutherAI/pile", "dataset:EleutherAI/pile-preshuffled-seeds", "arxiv:2503.09543", "license:apache-2.0", "region:us" ]
null
2026-02-12T13:49:45Z
# Pythia-160M-seed3 GPT-NeoX Checkpoints This repository contains the raw [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) training checkpoints for [Pythia-160M-seed3](https://huggingface.co/EleutherAI/pythia-160m-seed3), part of the [PolyPythias](https://huggingface.co/collections/EleutherAI/polypythias) suite. The...
[]
arithmetic-circuit-overloading/Llama-3.3-70B-Instruct-3d-1M-100K-0.2-reverse-padzero-plus-mul-sub-99-64D-2L-8H-256I
arithmetic-circuit-overloading
2026-02-27T01:46:42Z
480
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "base_model:meta-llama/Llama-3.3-70B-Instruct", "base_model:finetune:meta-llama/Llama-3.3-70B-Instruct", "license:llama3.3", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2026-02-27T01:14:57Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-3.3-70B-Instruct-3d-1M-100K-0.2-reverse-padzero-plus-mul-sub-99-64D-2L-8H-256I This model is a fine-tuned version of [meta-...
[]
Muapi/azure-sketch-illustration
Muapi
2025-08-18T17:41:15Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-18T17:39:58Z
# Azure Sketch Illustration ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: ArsMJStyle, AzureSketch ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" ...
[]
OvercastLab/Quark-50m-Instruct
OvercastLab
2026-04-28T08:00:53Z
2,397
2
null
[ "pytorch", "safetensors", "llama", "smol", "pretraining", "instruct", "50M", "causal-lm", "gqa", "swiglu", "rmsnorm", "text-generation", "conversational", "en", "code", "dataset:HuggingFaceTB/smollm-corpus", "license:apache-2.0", "region:us" ]
text-generation
2026-04-22T18:58:49Z
# Quark-50m-Instruct **Quark-50m-Instruct** is a small (≈56M parameters) decoder-only language model, fine-tuned for instruction following. It is built on the same architecture as the “SmolLM” family and was fully pretrained on 5 billion tokens from [HuggingFaceTB/smollm-corpus](https://huggingface.co/datasets/HuggingFace...
[]
mradermacher/Disco-GGUF
mradermacher
2025-11-10T14:10:20Z
3
0
transformers
[ "transformers", "gguf", "llama-factory", "en", "base_model:ladmol/Disco", "base_model:quantized:ladmol/Disco", "endpoints_compatible", "region:us", "conversational" ]
null
2025-11-10T13:55:26Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
garrison/Magidonia-24B-v4.3-mlx-3Bit
garrison
2025-12-17T21:22:13Z
7
0
mlx
[ "mlx", "safetensors", "mistral", "base_model:TheDrummer/Magidonia-24B-v4.3", "base_model:quantized:TheDrummer/Magidonia-24B-v4.3", "3-bit", "region:us" ]
null
2025-12-17T21:20:51Z
# garrison/Magidonia-24B-v4.3-mlx-3Bit The Model [garrison/Magidonia-24B-v4.3-mlx-3Bit](https://huggingface.co/garrison/Magidonia-24B-v4.3-mlx-3Bit) was converted to MLX format from [TheDrummer/Magidonia-24B-v4.3](https://huggingface.co/TheDrummer/Magidonia-24B-v4.3) using mlx-lm version **0.28.3**. ## Use with mlx ...
[]
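The Magidonia MLX record above points to `mlx-lm` for inference. A minimal sketch of the usual flow, assuming an Apple-silicon machine with `mlx-lm` installed:

```python
from mlx_lm import load, generate

# Download the MLX-converted weights and run a short generation.
model, tokenizer = load("garrison/Magidonia-24B-v4.3-mlx-3Bit")
print(generate(model, tokenizer, prompt="Hello", max_tokens=64))
```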
bartowski/OpenCoder-8B-Instruct-GGUF
bartowski
2024-11-11T02:21:05Z
210
10
null
[ "gguf", "text-generation", "en", "zh", "dataset:OpenCoder-LLM/opencoder-sft-stage1", "dataset:OpenCoder-LLM/opencoder-sft-stage2", "base_model:infly/OpenCoder-8B-Instruct", "base_model:quantized:infly/OpenCoder-8B-Instruct", "license:other", "endpoints_compatible", "region:us", "conversational...
text-generation
2024-11-11T01:49:36Z
## Llamacpp imatrix Quantizations of OpenCoder-8B-Instruct Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4014">b4014</a> for quantization. Original model: https://huggingface.co/infly/OpenCoder-8B-Instruct All quants made u...
[]
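For GGUF quant repos like the OpenCoder one above, one common consumption path is `llama-cpp-python`. A sketch under that assumption; the filename glob is illustrative, so check the repo's file list for the exact quant name:

```python
from llama_cpp import Llama

# from_pretrained fetches a matching GGUF file from the Hub;
# the glob below is an assumption, not a verified filename.
llm = Llama.from_pretrained(
    repo_id="bartowski/OpenCoder-8B-Instruct-GGUF",
    filename="*Q4_K_M.gguf",
)
out = llm("Write a function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```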
zacoriandre/backrooms_poolrooms_1_5
zacoriandre
2025-12-02T17:13:40Z
5
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:ckpt/sd15", "base_model:adapter:ckpt/sd15", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2025-12-02T16:31:32Z
# Backrooms + POOLROOMS version 1.5 <Gallery /> ## Model description This model is a custom SD 1.5 LoRA trained on screenshots from backrooms games, artwork, and 3D-rendered images from different web sources. Training images were at 512x512 resolution. Copyright (c) 2022 Robin Rombach and Patrick Esser and cont...
[]
pkulium/easy_deepocr
pkulium
2025-11-04T22:21:02Z
0
0
transformers
[ "transformers", "safetensors", "llava_llama", "ocr", "vision-language", "qwen2-vl", "vila", "multimodal", "image-text-to-text", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-11-04T22:09:14Z
# Easy DeepOCR - VILA-Qwen2-VL-8B A vision-language model fine-tuned for OCR tasks, based on VILA architecture with Qwen2-VL-8B as the language backbone. ## Model Description This model combines: - **Language Model**: Qwen2-VL-8B - **Vision Encoders**: SAM + CLIP - **Architecture**: VILA (Visual Language Adapter) - ...
[]
chloeli/qwen-3-14b-value-aug-spec-msm
chloeli
2026-05-01T11:37:40Z
0
0
peft
[ "peft", "safetensors", "qwen3", "base_model:Qwen/Qwen3-14B", "base_model:adapter:Qwen/Qwen3-14B", "license:mit", "region:us" ]
null
2026-05-01T11:37:29Z
# qwen-3-14b-value-aug-spec-msm A LoRA adapter for [Qwen/Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B), trained using model spec midtraining (MSM) only. - **Base model:** Qwen/Qwen3-14B - **LoRA rank:** 64 - **LoRA alpha:** 128 - **Target modules:** q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj #...
[ { "start": 147, "end": 150, "text": "MSM", "label": "training method", "score": 0.7539972066879272 } ]
barozp/Qwen-3.5-28B-A3B-REAP-GGUF
barozp
2026-03-29T14:16:09Z
548
3
null
[ "gguf", "quantized", "qwen3_5_moe", "moe", "pruning", "reap", "qwen3", "expert-pruning", "llama-cpp", "en", "arxiv:2510.13999", "base_model:0xSero/Qwen-3.5-28B-A3B-REAP", "base_model:quantized:0xSero/Qwen-3.5-28B-A3B-REAP", "license:apache-2.0", "endpoints_compatible", "region:us", "...
null
2026-03-28T19:33:29Z
# Qwen-3.5-28B-A3B-REAP — GGUF Q4_K_M GGUF quantization of [0xSero/Qwen-3.5-28B-A3B-REAP](https://huggingface.co/0xSero/Qwen-3.5-28B-A3B-REAP), a pruned variant of [Qwen/Qwen3.5-35B-A3B](https://huggingface.co/Qwen/Qwen3.5-35B-A3B) using the REAP (Refined Expert Activation Pruning) method. ## Available Files | File ...
[]
zhiyuanyan1/UAE
zhiyuanyan1
2025-09-13T17:53:01Z
3
1
transformers
[ "transformers", "diffusers", "safetensors", "Text-to-Image, Image-to-Text", "text-to-image", "en", "arxiv:2509.09666", "base_model:stabilityai/stable-diffusion-3.5-large", "base_model:finetune:stabilityai/stable-diffusion-3.5-large", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-to-image
2025-09-11T02:21:40Z
# UAE ### Paper This is the official pre-trained weights of the paper "Can Understanding and Generation Truly Benefit Together -- or Just Coexist?" (https://arxiv.org/abs/2509.09666). ### Github The official code is available at: https://github.com/PKU-YuanGroup/UAE. ### Abstract The field’s long-standing split ...
[]
YuxinJiang/qwen3_30b_a3b_2507_sft_rebench_real_2395
YuxinJiang
2026-02-06T21:30:33Z
1
0
transformers
[ "transformers", "safetensors", "qwen3_moe", "text-generation", "llama-factory", "generated_from_trainer", "conversational", "endpoints_compatible", "region:us" ]
text-generation
2026-02-06T03:37:54Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # qwen3_30b_a3b_2507_sft_rebench_real_2395 This model was trained from scratch on an unknown dataset. ## Model description More i...
[]
gsjang/ko-koni-llama3-8b-instruct-20240729-x-meta-llama-3-8b-instruct-fusion_merge
gsjang
2025-09-11T13:51:08Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:KISTI-KONI/KONI-Llama3-8B-Instruct-20240729", "base_model:merge:KISTI-KONI/KONI-Llama3-8B-Instruct-20240729", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:merge:meta-lla...
text-generation
2025-09-11T13:48:10Z
# ko-koni-llama3-8b-instruct-20240729-x-meta-llama-3-8b-instruct-fusion_merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the KV-OT Merge (FFN Key–Value aware OT) merge method using [meta-lla...
[ { "start": 255, "end": 266, "text": "KV-OT Merge", "label": "training method", "score": 0.8672242164611816 }, { "start": 756, "end": 767, "text": "kv_ot_merge", "label": "training method", "score": 0.7260460257530212 } ]
profpeng/Pussylicking
profpeng
2026-01-23T18:48:27Z
184
2
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:Wan-AI/Wan2.2-I2V-A14B", "base_model:adapter:Wan-AI/Wan2.2-I2V-A14B", "region:us" ]
text-to-image
2026-01-23T18:47:09Z
# pussylicking <Gallery /> ## Model description dynamic camera movement pivoting left, revealing a woman's body from a side angle, her legs spread. Close-up from the left side on her vulva. A second woman walks into the frame from the right side, leaning in. Her tongue flicks and sucks rhythmically on the clito...
[]
Thrillcrazyer/Qwen1.5_THIP_1214
Thrillcrazyer
2025-12-15T02:30:12Z
1
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "grpo", "trl", "conversational", "dataset:DeepMath-103k", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct", "text-generation-inference", "endpoi...
text-generation
2025-12-14T17:55:03Z
# Model Card for Qwen1.5_THIP_1214 This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [DeepMath-103k](https://huggingface.co/datasets/DeepMath-103k) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start `...
[]
ooeoeo/opus-mt-cs-de-ct2-float16
ooeoeo
2026-04-17T12:01:48Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "custom", "license:apache-2.0", "region:us" ]
translation
2026-04-17T12:01:24Z
# ooeoeo/opus-mt-cs-de-ct2-float16 CTranslate2 float16 quantized version of `Helsinki-NLP/opus-mt-cs-de`. Converted for use in the [ooeoeo](https://ooeoeo.com) desktop engine with the `opus-mt-server` inference runtime. ## Source - Upstream model: [Helsinki-NLP/opus-mt-cs-de](https://huggingface.co/Helsinki-NLP/opu...
[]
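The opus-mt-cs-de-ct2-float16 record describes a CTranslate2 conversion of a Marian model. A sketch of the standard CTranslate2 + Marian inference flow, assuming the converted weights have been downloaded into a local directory named after the repo:

```python
import ctranslate2
from transformers import AutoTokenizer

# Tokenize with the upstream Opus-MT tokenizer, translate with CTranslate2.
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-cs-de")
translator = ctranslate2.Translator("opus-mt-cs-de-ct2-float16")

source = tokenizer.convert_ids_to_tokens(tokenizer.encode("Dobrý den!"))
result = translator.translate_batch([source])
target = tokenizer.convert_tokens_to_ids(result[0].hypotheses[0])
print(tokenizer.decode(target, skip_special_tokens=True))
```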
JH-C-k/mistral-7b-continual-sft-arce
JH-C-k
2026-04-14T07:21:45Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:mistralai/Mistral-7B-v0.1", "lora", "sft", "transformers", "trl", "text-generation", "base_model:mistralai/Mistral-7B-v0.1", "region:us" ]
text-generation
2026-04-14T06:35:07Z
# Model Card for exp10-retrain This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, b...
[]
lava123456/7058bf29-0cfa-4536-8808-eba5f753e25d
lava123456
2026-01-23T13:34:07Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "smolvla", "dataset:qualiaadmin/53", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2026-01-23T13:33:39Z
# Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This pol...
[]
HISEHAN/bert-base-nsmc
HISEHAN
2025-08-27T06:27:44Z
4
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "base_model:klue/bert-base", "base_model:finetune:klue/bert-base", "license:cc-by-sa-4.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
text-classification
2025-08-27T06:27:13Z
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-nsmc This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on an unknown dataset. It ...
[ { "start": 876, "end": 882, "text": "WarmUp", "label": "training method", "score": 0.7122904658317566 }, { "start": 1254, "end": 1260, "text": "WarmUp", "label": "training method", "score": 0.7392774224281311 } ]
Yuivdldk/gemma-3-12b-it-lora-bf16
Yuivdldk
2026-03-02T04:39:25Z
1
0
peft
[ "peft", "safetensors", "base_model:adapter:google/gemma-3-12b-it", "lora", "transformers", "text-generation", "conversational", "base_model:google/gemma-3-12b-it", "license:gemma", "region:us" ]
text-generation
2026-03-02T03:59:35Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gemma-3-12b-it-lora This model is a fine-tuned version of [google/gemma-3-12b-it](https://huggingface.co/google/gemma-3-12b-it) o...
[]
DeepBrainz/DeepBrainz-R1-0.6B
DeepBrainz
2026-02-05T15:12:44Z
14
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "deepbrainz", "reasoning", "mathematics", "code", "enterprise", "0.6b", "long-context", "en", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2026-02-03T21:54:54Z
# DeepBrainz-R1-0.6B **DeepBrainz-R1-0.6B** is a compact, high-performance reasoning model engineered by **DeepBrainz AI & Labs**. It is part of the **DeepBrainz-R1 Series**, designed to deliver frontier-class reasoning capabilities in cost-effective parameter sizes. This variant features a **32,768 token context win...
[]
Alelcv27/Qwen-7B-Slerp-v1
Alelcv27
2026-01-29T13:53:20Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "conversational", "base_model:Alelcv27/Qwen2.5-7B-Instruct-Code", "base_model:merge:Alelcv27/Qwen2.5-7B-Instruct-Code", "base_model:Alelcv27/Qwen2.5-7B-Instruct-Math-CoT", "base_model:merge:Alelcv27/Qwen2.5-7B-Instru...
text-generation
2026-01-29T13:31:55Z
# Qwen-7B-Slerp-v1 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method. ### Models Merged The following models were included in the mer...
[]
KhaledReda/all-MiniLM-L6-v27-pair_score
KhaledReda
2026-01-20T17:09:46Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:153453064", "loss:CoSENTLoss", "en", "dataset:KhaledReda/pairs_with_scores_v23_typos_and_false_negatives", "arxiv:1908.10084", "base_model:KhaledReda/...
sentence-similarity
2026-01-19T15:31:20Z
# all-MiniLM-L6-v27-pair_score This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [KhaledReda/all-MiniLM-L6-v26-pair_score](https://huggingface.co/KhaledReda/all-MiniLM-L6-v26-pair_score) on the [pairs_with_scores_v23_typos_and_false_negatives](https://huggingface.co/datasets/KhaledReda/pair...
[]
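The all-MiniLM-L6-v27-pair_score record is a sentence-transformers embedding model trained with CoSENTLoss on scored pairs. A minimal usage sketch with cosine similarity:

```python
from sentence_transformers import SentenceTransformer, util

# Embed two sentences and compare them.
model = SentenceTransformer("KhaledReda/all-MiniLM-L6-v27-pair_score")
emb = model.encode(["first sentence", "second sentence"], convert_to_tensor=True)
print(util.cos_sim(emb[0], emb[1]))
```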
minhwantttt/groot-furniturebench
minhwantttt
2026-04-09T19:55:58Z
72
0
lerobot
[ "lerobot", "safetensors", "robotics", "groot", "dataset:minhwantttt/furniturebench-all", "license:apache-2.0", "region:us" ]
robotics
2026-04-08T11:58:41Z
# Model Card for groot <!-- Provide a quick summary of what the model is/does. --> _Model type not recognized — please update this template._ This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface....
[]
Userb1az/DeepSeek-Coder-V2-Lite-Instruct-GGUF
Userb1az
2025-11-07T08:22:01Z
47
0
null
[ "gguf", "arxiv:2401.06066", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2025-11-07T08:01:59Z
<!-- markdownlint-disable first-line-h1 --> <!-- markdownlint-disable html --> <!-- markdownlint-disable no-duplicate-header --> <div align="center"> <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V2" /> </div> <hr> <div align="center" style="line-...
[]
kojogyaase/bert-finetuned-ner
kojogyaase
2026-01-22T19:00:14Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
token-classification
2026-01-22T15:59:38Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll20...
[]
mradermacher/ide-code-retrieval-qwen3-0.6b-GGUF
mradermacher
2026-04-03T14:51:57Z
0
0
transformers
[ "transformers", "gguf", "sentence-transformers", "sentence-similarity", "feature-extraction", "code-retrieval", "embeddings", "en", "dataset:aysinghal/code-retrieval-training-dataset", "base_model:aysinghal/ide-code-retrieval-qwen3-0.6b", "base_model:quantized:aysinghal/ide-code-retrieval-qwen3-...
feature-extraction
2026-04-03T14:43:02Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
AlesioRFM/improqwen2
AlesioRFM
2026-04-24T22:27:02Z
0
0
null
[ "gguf", "qwen3_5", "llama.cpp", "unsloth", "vision-language-model", "endpoints_compatible", "region:us", "conversational" ]
null
2026-04-24T22:26:34Z
# improqwen2 : GGUF This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth). **Example usage**: - For text only LLMs: `llama-cli -hf AlesioRFM/improqwen2 --jinja` - For multimodal models: `llama-mtmd-cli -hf AlesioRFM/improqwen2 --jinja` ## Available Model files...
[ { "start": 82, "end": 89, "text": "Unsloth", "label": "training method", "score": 0.7478818893432617 }, { "start": 120, "end": 127, "text": "unsloth", "label": "training method", "score": 0.82794189453125 }, { "start": 452, "end": 459, "text": "unsloth", ...
tharunrega/qwen2.5-1.5b-finance-dpo
tharunrega
2026-04-12T14:34:21Z
0
0
peft
[ "peft", "safetensors", "gguf", "base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct", "dpo", "lora", "transformers", "trl", "text-generation", "conversational", "arxiv:2305.18290", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
text-generation
2026-04-12T14:32:19Z
# Model Card for qwen-finance-dpo This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machi...
[ { "start": 181, "end": 184, "text": "TRL", "label": "training method", "score": 0.7633991241455078 }, { "start": 693, "end": 696, "text": "DPO", "label": "training method", "score": 0.8293151259422302 }, { "start": 1002, "end": 1005, "text": "DPO", "la...
mradermacher/Starlit-Shadow-12B-Heretic-GGUF
mradermacher
2026-03-18T21:40:06Z
588
1
transformers
[ "transformers", "gguf", "en", "base_model:Sorihon/Starlit-Shadow-12B-Heretic", "base_model:quantized:Sorihon/Starlit-Shadow-12B-Heretic", "endpoints_compatible", "region:us", "conversational" ]
null
2026-03-18T20:25:06Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
anemll/anemll-google-gemma-3-1b-it-ctx4096_0.3.4
anemll
2026-01-30T22:50:16Z
10
1
null
[ "gemma", "coreml", "ANE", "LLaMA", "Qwen", "DeepSeek", "Gemma", "Apple", "Apple Neural Engine", "DeepHermes", "license:mit", "region:us" ]
null
2026-01-28T22:17:48Z
# ANEMLL ## Apple Neural Engine Optimized **ANEMLL** (pronounced like "animal") is an open-source project focused on accelerating the porting of Large Language Models (LLMs) to tensor processors, starting with the Apple Neural Engine (ANE). The goal is to provide a fully open-source pipeline from model conversion to...
[]
mradermacher/CORE2-llama-3.2-3b-MATH-GGUF
mradermacher
2025-09-22T10:20:20Z
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "trl", "grpo", "hf_jobs", "en", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-22T10:05:32Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
Dexmal/DM0-table30_pour_fries_into_plate
Dexmal
2026-02-09T09:41:14Z
6
0
null
[ "safetensors", "dexbotic_dm0", "license:cc", "region:us" ]
null
2026-02-08T15:08:15Z
This model is a DM0 supervised fine-tuned checkpoint for the RoboChallenge pour_fries_into_plate task. | Model | Description | Input Images | Action Dim | Model Size | | - | - ...
[]
flexitok/unigram_por_Latn_32000
flexitok
2026-02-23T03:19:50Z
0
0
null
[ "tokenizer", "unigram", "flexitok", "fineweb2", "por", "license:mit", "region:us" ]
null
2026-02-23T03:19:50Z
# UnigramLM Tokenizer: por_Latn (32K) A **UnigramLM** tokenizer trained on **por_Latn** data from Fineweb-2-HQ. ## Training Details | Parameter | Value | |-----------|-------| | Algorithm | UnigramLM | | Language | `por_Latn` | | Target Vocab Size | 32,000 | | Final Vocab Size | 0 | | Pre-tokenizer | ByteLevel | | N...
[]
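The flexitok record describes a UnigramLM tokenizer with a ByteLevel pre-tokenizer and a 32K target vocabulary. A hedged training sketch with the 🤗 `tokenizers` library; the corpus filename is hypothetical:

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Unigram model + ByteLevel pre-tokenizer, mirroring the card's parameters.
tokenizer = Tokenizer(models.Unigram())
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel()
trainer = trainers.UnigramTrainer(vocab_size=32000)
tokenizer.train(["por_Latn.txt"], trainer)  # hypothetical corpus file
tokenizer.save("unigram_por_Latn_32000.json")
```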
dgrauet/ernie-image-pe-mlx
dgrauet
2026-04-20T21:13:08Z
0
0
mlx
[ "mlx", "mlx-forge", "apple-silicon", "safetensors", "base_model:baidu/ERNIE-Image-Turbo", "base_model:finetune:baidu/ERNIE-Image-Turbo", "license:apache-2.0", "region:us" ]
null
2026-04-20T20:37:23Z
# dgrauet/ernie-image-pe-mlx MLX format conversion of [baidu/ERNIE-Image-Turbo](https://huggingface.co/baidu/ERNIE-Image-Turbo). Converted with [mlx-forge](https://github.com/dgrauet/mlx-forge). ## Usage These weights can be used with [ernie-image-mlx](https://github.com/dgrauet/ernie-image-mlx). ```bash pip insta...
[]
the-fall-of-man/didact-plump-hare-v1beta2-mxfp8
the-fall-of-man
2026-03-04T03:54:57Z
47
0
mlx
[ "mlx", "safetensors", "gpt_oss", "sillytavern", "roleplaying", "creative writing", "text-generation", "conversational", "en", "8-bit", "region:us" ]
text-generation
2026-03-03T16:57:17Z
## Didact Plump v1 beta (mk IV) An improvement on didact plump, but by no means complete. Short stats: - 35 MTok of personal data; - 4 rounds of ORPO fine-tuning towards better roleplay - A decent attempt, so far, to get a GPT-OSS model to roleplay. Quirks: - Needs better stop token training (I suggest "<|start|>use...
[]
wikilangs/an
wikilangs
2026-01-03T17:05:58Z
0
0
wikilangs
[ "wikilangs", "nlp", "tokenizer", "embeddings", "n-gram", "markov", "wikipedia", "feature-extraction", "sentence-similarity", "tokenization", "n-grams", "markov-chain", "text-mining", "fasttext", "babelvec", "vocabulous", "vocabulary", "monolingual", "family-romance_iberian", "t...
text-generation
2025-12-27T06:02:08Z
# Aragonese - Wikilangs Models ## Comprehensive Research Report & Full Ablation Study This repository contains NLP models trained and evaluated by Wikilangs, specifically on **Aragonese** Wikipedia data. We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and word embeddings. ## 📋 Repository ...
[ { "start": 1298, "end": 1319, "text": "Tokenizer Compression", "label": "training method", "score": 0.7058837413787842 } ]
bmeyer2025/tiny-gpt-shakespeare
bmeyer2025
2026-03-31T19:30:55Z
357
0
null
[ "pytorch", "transformer", "language-model", "from-scratch", "educational", "shakespeare", "rope", "swiglu", "rmsnorm", "kv-cache", "text-generation", "en", "dataset:tiny-shakespeare", "arxiv:2104.09864", "arxiv:2002.05202", "arxiv:1910.07467", "license:mit", "region:us" ]
text-generation
2026-03-31T19:10:51Z
# tiny-gpt-shakespeare A 10M parameter decoder-only transformer trained on the Tiny Shakespeare dataset. Built from scratch in PyTorch as an educational project — no pretrained weights or external libraries used for the model itself. ## Model Description - **Architecture:** Decoder-only transformer with modern compo...
[]
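The tiny-gpt-shakespeare card lists RMSNorm among its from-scratch components. As a concrete illustration of that piece (not the repo's actual code), a minimal PyTorch RMSNorm:

```python
import torch
import torch.nn as nn

# Minimal RMSNorm (arXiv:1910.07467): normalize by the root-mean-square
# of the last dimension, then rescale with a learned weight.
class RMSNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return self.weight * x * rms
```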
leledeyuan/pusht1M
leledeyuan
2025-09-19T17:42:36Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "diffusion", "dataset:leledeyuan/pusht", "arxiv:2303.04137", "license:apache-2.0", "region:us" ]
robotics
2025-09-19T17:42:13Z
# Model Card for diffusion <!-- Provide a quick summary of what the model is/does. --> [Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation. This policy has ...
[]
nikhil061307/clinical-classifier-self-contained
nikhil061307
2025-09-04T15:20:47Z
0
0
null
[ "safetensors", "clinical_classification", "region:us" ]
null
2025-09-04T15:09:36Z
# Clinical Entity Classification Model (Self-Contained) This is a self-contained clinical entity classification model that predicts whether medical entities are: - **Absent**: The condition/entity is not present - **Hypothetical**: The condition/entity might be present (uncertain) - **Present**: The condition/entity i...
[]
shuhei25/act_vfolding
shuhei25
2025-12-18T11:32:02Z
1
0
lerobot
[ "lerobot", "safetensors", "robotics", "act", "dataset:shuhei25/VFolding100_in_one_go", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2025-12-18T11:31:47Z
# Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ...
[ { "start": 17, "end": 20, "text": "act", "label": "training method", "score": 0.831265389919281 }, { "start": 120, "end": 123, "text": "ACT", "label": "training method", "score": 0.8477550148963928 }, { "start": 865, "end": 868, "text": "act", "label":...
hssling/derm-analyzer-adapter
hssling
2026-02-24T16:18:43Z
27
0
peft
[ "peft", "safetensors", "dermatology", "medical", "vision-language-model", "lora", "indian-health", "en", "dataset:marmal88/skin_cancer", "dataset:pvlinhk/ISIC2019-full", "base_model:Qwen/Qwen2-VL-2B-Instruct", "base_model:adapter:Qwen/Qwen2-VL-2B-Instruct", "license:apache-2.0", "region:us...
null
2026-02-24T16:18:36Z
# DermaAI LoRA Adapter — Indian Skin Type Tuned Fine-tuned LoRA adapter on top of `Qwen2-VL-2B-Instruct` for clinical dermatological diagnosis with a specific focus on **South Asian skin types (Fitzpatrick IV–VI)** and Indian treatment protocols. ## Training Data - **HAM10000** (marmal88/skin_cancer): 10,015 dermosco...
[]
ctaguchi/ssc-bas-mms-model-mix-adapt-max2
ctaguchi
2025-12-08T22:22:20Z
0
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/mms-1b-all", "base_model:finetune:facebook/mms-1b-all", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-12-08T09:47:25Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ssc-bas-mms-model-mix-adapt-max2 This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-...
[]
priorcomputers/qwen2.5-14b-instruct-cn-ideation-kr0.2-a0.1-creative
priorcomputers
2026-02-11T04:23:12Z
1
0
null
[ "safetensors", "qwen2", "creativityneuro", "llm-creativity", "mechanistic-interpretability", "base_model:Qwen/Qwen2.5-14B-Instruct", "base_model:finetune:Qwen/Qwen2.5-14B-Instruct", "license:apache-2.0", "region:us" ]
null
2026-02-11T04:20:48Z
# qwen2.5-14b-instruct-cn-ideation-kr0.2-a0.1-creative This is a **CreativityNeuro (CN)** modified version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct). ## Model Details - **Base Model**: Qwen/Qwen2.5-14B-Instruct - **Modification**: CreativityNeuro weight scaling - **Prompt Set**...
[]
patrickamadeus/nanoVLM-cauldron-step-1000
patrickamadeus
2026-02-10T04:03:45Z
0
0
nanovlm
[ "nanovlm", "safetensors", "vision-language", "multimodal", "research", "image-text-to-text", "license:mit", "region:us" ]
image-text-to-text
2026-02-10T04:03:02Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards library_name: nanovlm license: mit pipeline_tag: image-text-to-text tags: - vision-language - multimodal - research --- **nan...
[]
edmon03/edtonai-scorer
edmon03
2026-04-11T12:25:40Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "cross-encoder", "reranker", "generated_from_trainer", "dataset_size:5616", "loss:BinaryCrossEntropyLoss", "text-ranking", "arxiv:1908.10084", "base_model:cross-encoder/mmarco-mMiniLMv2-L12-H384-v1", "base_model:finetune:cross-encoder/mmar...
text-ranking
2026-04-11T12:24:29Z
# CrossEncoder based on cross-encoder/mmarco-mMiniLMv2-L12-H384-v1 This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [cross-encoder/mmarco-mMiniLMv2-L12-H384-v1](https://huggingface.co/cross-encoder/mmarco-mMiniLMv2-L12-H384-v1) using the [sentence-transformers](...
[]
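The edtonai-scorer record is a sentence-transformers CrossEncoder reranker. A minimal scoring sketch over (query, passage) pairs:

```python
from sentence_transformers import CrossEncoder

# A CrossEncoder scores each (query, passage) pair jointly.
model = CrossEncoder("edmon03/edtonai-scorer")
scores = model.predict([
    ("query text", "candidate passage"),
    ("query text", "another passage"),
])
print(scores)
```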
Sp1keeee/qwen3vl-dino-lora
Sp1keeee
2026-01-25T05:49:49Z
0
0
null
[ "safetensors", "lora", "qwen3-vl", "game-ai", "computer-vision", "zh", "en", "license:apache-2.0", "region:us" ]
null
2026-01-25T05:42:46Z
# Qwen3-VL Chrome Dinosaur LoRA LoRA fine-tuned weights based on Qwen3-VL-2B, for playing the Chrome dinosaur game. ## Usage ```python from transformers import Qwen3VLForConditionalGeneration from peft import PeftModel # Load the base model base_model = Qwen3VLForConditionalGeneration.from_pretrained("Qwen/Qwen3-VL-2B") # Load the LoRA weights model = PeftModel.from_pretrained(...
[]
Edy500/humanoid-instruction-model-3-120226
Edy500
2026-02-12T14:35:48Z
0
0
null
[ "humanoid", "robotics", "instruction-following", "safety", "license:mit", "region:us" ]
robotics
2026-02-12T14:35:47Z
--- license: mit tags: - humanoid - robotics - instruction-following - safety --- # Humanoid Instruction Model - 300126 (v1) This repository is a lightweight placeholder model entry for humanoid instruction-following tasks. ## Overview Provides a valid Hugging Face model structure for robotics workflo...
[]
ringover/ringover-summaries-llama3b-instruct-v1.2
ringover
2026-03-13T08:16:59Z
91
0
peft
[ "peft", "safetensors", "base_model:adapter:meta-llama/Llama-3.2-3B-Instruct", "lora", "sft", "transformers", "trl", "summarization", "ringover", "text-generation", "conversational", "fr", "en", "es", "ca", "it", "pt", "de", "pl", "base_model:meta-llama/Llama-3.2-3B-Instruct", ...
text-generation
2026-02-23T09:28:08Z
# Model Card for Model ID ## Model Details ### Model Description This model is a LoRA (Low-Rank Adaptation) adapter for **Llama-3.2-3B-Instruct**, specifically fine-tuned for high-quality multilingual (fr, en, es) summarization of phone call transcripts. It has been optimized to handle long-form dialogue and extract key...
[]
Grogros/phi2-Instruct-reg2-1
Grogros
2025-11-19T20:17:04Z
0
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "generated_from_trainer", "conversational", "base_model:microsoft/phi-2", "base_model:finetune:microsoft/phi-2", "license:mit", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-11-19T14:41:40Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi2-Instruct-reg2-1 This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None ...
[]
UnifiedHorusRA/Wan_2.2_View_from_the_window_by_MQ_Lab
UnifiedHorusRA
2025-09-13T21:32:16Z
3
0
null
[ "custom", "art", "en", "region:us" ]
null
2025-09-08T06:43:46Z
# Wan 2.2 View from the window by MQ Lab **Creator**: [MQ_Lab](https://civitai.com/user/MQ_Lab) **Civitai Model Page**: [https://civitai.com/models/1929992](https://civitai.com/models/1929992) --- This repository contains multiple versions of the 'Wan 2.2 View from the window by MQ Lab' model from Civitai. Each vers...
[]
WindyWord/translate-da-no
WindyWord
2026-04-20T13:23:08Z
0
0
transformers
[ "transformers", "safetensors", "translation", "marian", "windyword", "danish", "norwegian", "da", "no", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
translation
2026-04-16T00:34:55Z
# WindyWord.ai Translation — Danish → Norwegian **Translates Danish → Norwegian.** **Quality Rating: ⭐⭐⭐½ (3.5★ Good)** Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs. ## Quality & Pricing Tier - **5-star rating:** 3.5★ ⭐⭐⭐½ - **Tier:** Good - **Composite sc...
[]
rkgupta3/bart-base-text-to-sql-smoke-test
rkgupta3
2025-08-06T14:09:20Z
1
1
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-base", "base_model:finetune:facebook/bart-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-06T13:54:28Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-text-to-sql-smoke-test This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-...
[]
crislmfroes/xvla-xarm6-pick-mustard-bottle-sim-pose-randomized-v4-1000
crislmfroes
2026-05-02T05:42:21Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "xvla", "dataset:crislmfroes/xarm6-pick-mustard-bottle-sim-pose-randomized-v4-1000", "license:apache-2.0", "region:us" ]
robotics
2026-05-02T05:41:45Z
# Model Card for xvla <!-- Provide a quick summary of what the model is/does. --> _Model type not recognized — please update this template._ This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.c...
[]
google/timesfm-2.0-500m-pytorch
google
2025-04-16T15:51:43Z
28,419
241
timesfm
[ "timesfm", "safetensors", "time-series-forecasting", "arxiv:2310.10688", "arxiv:2402.02592", "license:apache-2.0", "region:us" ]
time-series-forecasting
2024-12-24T00:11:39Z
# TimesFM TimesFM (Time Series Foundation Model) is a pretrained time-series foundation model developed by Google Research for time-series forecasting. **Resources and Technical Documentation**: * Paper: [A decoder-only foundation model for time-series forecasting](https://arxiv.org/abs/2310.10688), ICML 2024. * [Go...
[]
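The TimesFM record above is a pretrained forecasting checkpoint. A sketch following the TimesFM 2.0 README; the exact hyperparameter arguments vary by package version, so treat them as assumptions:

```python
import numpy as np
import timesfm

# Load the 2.0 500M PyTorch checkpoint from the Hub (API per the README;
# argument names may differ across timesfm versions).
tfm = timesfm.TimesFm(
    hparams=timesfm.TimesFmHparams(
        backend="cpu",
        per_core_batch_size=32,
        horizon_len=128,
    ),
    checkpoint=timesfm.TimesFmCheckpoint(
        huggingface_repo_id="google/timesfm-2.0-500m-pytorch"
    ),
)
point_forecast, _ = tfm.forecast(
    [np.sin(np.linspace(0, 20, 100))],  # one context series
    freq=[0],  # 0 = high-frequency category
)
```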
mradermacher/magibu-11b-v8-i1-GGUF
mradermacher
2026-02-19T14:35:55Z
70
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "unsloth", "sft", "trl", "en", "base_model:alibayram/magibu-11b-v8", "base_model:quantized:alibayram/magibu-11b-v8", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2026-02-19T13:51:57Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_...
[]
rbelanec/train_rte_101112_1760638012
rbelanec
2025-10-20T01:35:33Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "llama-factory", "transformers", "text-generation", "conversational", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
text-generation
2025-10-20T00:52:05Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # train_rte_101112_1760638012 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/me...
[]
bartowski/Phi-3.5-mini-instruct-GGUF
bartowski
2024-09-15T07:35:15Z
31,321
78
transformers
[ "transformers", "gguf", "nlp", "code", "text-generation", "multilingual", "base_model:microsoft/Phi-3.5-mini-instruct", "base_model:quantized:microsoft/Phi-3.5-mini-instruct", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-08-20T19:56:23Z
## Llamacpp imatrix Quantizations of Phi-3.5-mini-instruct Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3751">b3751</a> for quantization. Original model: https://huggingface.co/microsoft/Phi-3.5-mini-instruct All quants ma...
[]
NotARoomba/eval_synapse_act_5_v2
NotARoomba
2025-12-21T21:01:59Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "act", "dataset:NotARoomba/synapse_5", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2025-12-21T21:01:52Z
# Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ...
[ { "start": 17, "end": 20, "text": "act", "label": "training method", "score": 0.831265389919281 }, { "start": 120, "end": 123, "text": "ACT", "label": "training method", "score": 0.8477550148963928 }, { "start": 865, "end": 868, "text": "act", "label":...
mradermacher/Qwen3-15B-A2B-Base-GGUF
mradermacher
2025-11-21T11:51:05Z
29
0
transformers
[ "transformers", "gguf", "en", "base_model:TroyDoesAI/Qwen3-15B-A2B-Base", "base_model:quantized:TroyDoesAI/Qwen3-15B-A2B-Base", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-11-21T10:26:41Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
GleghornLab/optimal_ph_DPLM2-3B_2026-04-27-19-40_RTHS
GleghornLab
2026-04-27T19:45:00Z
0
0
transformers
[ "transformers", "safetensors", "probe", "text-classification", "endpoints_compatible", "region:us" ]
text-classification
2026-04-27T19:44:47Z
# GleghornLab/optimal_ph_DPLM2-3B_2026-04-27-19-40_RTHS Fine-tuned with Protify. ## About Protify Protify is an open source platform designed to simplify and democratize workflows for chemical language models. With Protify, deep learning models can be trained to predict chemical properties without requiring extensiv...
[]
kavyasree19/Sentiment_analysis_finetuning
kavyasree19
2026-04-11T06:35:11Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
text-classification
2026-04-11T06:29:13Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Sentiment_analysis_finetuning This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-b...
[]
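Since this is a standard `transformers` text-classification checkpoint, inference reduces to a pipeline call; label names depend on how the labels were mapped during fine-tuning, which the truncated card does not show:

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="kavyasree19/Sentiment_analysis_finetuning",
)
# Output labels (e.g. POSITIVE/NEGATIVE or LABEL_0/LABEL_1) follow the
# fine-tuning label map, which is not visible in the truncated card.
print(clf("Battery life is fantastic, but the screen scratches easily."))
```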
mradermacher/Qwen3-8B-JP-Uncensored-GGUF
mradermacher
2026-03-05T19:06:20Z
870
0
transformers
[ "transformers", "gguf", "abliterated", "uncensored", "japanese", "qwen3", "ja", "en", "base_model:ryo559/Qwen3-8B-JP-Uncensored", "base_model:quantized:ryo559/Qwen3-8B-JP-Uncensored", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2026-03-04T23:45:07Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
Sashvat/HQQ-270M
Sashvat
2025-08-19T08:33:38Z
2
2
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "NukeverseAi", "HQQ", "HQQ-270M", "HQQ_270M", "DeepResearch", "gemma3", "gpt_oss", "conversational", "en", "base_model:google/gemma-3-270m-it", "base_model:finetune:google/gemma-3-270m-it", "license:other", "text-genera...
text-generation
2025-08-19T08:20:10Z
# 🚀 Introducing: HQQ-270M ## Overview The **HQQ-270M** model was developed by **Nukeverse AI** by fine-tuning [Gemma-3](https://huggingface.co/google/gemma-3-270m-it). It specializes in **transforming complex, multi-layered user queries into optimized, high-quality Google search queries**. ⚠️ **Usage requirement:** ...
[]
mlfoundations-dev/magicoder-evol-instruct-110k-sandboxes-traces-terminus-2_overwrite-output-dir_True
mlfoundations-dev
2025-10-02T16:06:06Z
1
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen3-8B", "base_model:finetune:Qwen/Qwen3-8B", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-10-02T12:50:52Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # magicoder-evol-instruct-110k-sandboxes-traces-terminus-2_overwrite-output-dir_True This model is a fine-tuned version of [Qwen/Qw...
[]
ufo001jone/Gemma-4-31B-JANG_4M-CRACK-GGUF
ufo001jone
2026-04-14T22:03:07Z
0
0
null
[ "gguf", "gemma4", "quantized", "31b", "text-generation", "en", "license:gemma", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2026-04-14T22:03:07Z
# Gemma-4-31B-JANG_4M-CRACK-GGUF GGUF quantizations of Gemma-4-31B-JANG_4M-CRACK for use with llama.cpp, LM Studio, Ollama, and other GGUF-compatible inference engines. ## About the Model - **Base model:** [google/gemma-4-31b-it](https://huggingface.co/google/gemma-4-31b-it) - **Architecture:** Gemma 4 Dense Transfo...
[]
nypgd/doktor-qwen3-8b-last-Q4_K_M-GGUF
nypgd
2025-08-11T19:40:30Z
5
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "qwen3", "trl", "sft", "llama-cpp", "gguf-my-repo", "en", "base_model:nypgd/doktor-qwen3-8b-last", "base_model:quantized:nypgd/doktor-qwen3-8b-last", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-11T19:40:05Z
# nypgd/doktor-qwen3-8b-last-Q4_K_M-GGUF This model was converted to GGUF format from [`nypgd/doktor-qwen3-8b-last`](https://huggingface.co/nypgd/doktor-qwen3-8b-last) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://...
[]
AntiSpamInstitute/spam-detector-bert-MoE-v2.2
AntiSpamInstitute
2024-12-23T09:21:21Z
2,610
4
null
[ "safetensors", "bert", "en", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "region:us" ]
null
2024-11-11T08:39:13Z
# Spam Detector BERT MoE v2.2 [![Hugging Face](https://img.shields.io/badge/Hugging%20Face-Model-blue)](https://huggingface.co/AntiSpamInstitute/spam-detector-bert-MoE-v2.2) [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](LICENSE) ## Table of Contents - [Overview](#overview) - [Model Descript...
[]
neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_2_rounds_2_0_iter_3_prover1_17552
neural-interactive-proofs
2025-08-15T13:21:41Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "dpo", "arxiv:2305.18290", "base_model:Qwen/Qwen2.5-32B-Instruct", "base_model:finetune:Qwen/Qwen2.5-32B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-08-15T13:16:26Z
# Model Card for finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_2_rounds_2_0_iter_3_prover1_17552 This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ``...
[]
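The card's quick-start snippet is cut off after the opening code fence. TRL's auto-generated cards typically show a `transformers` text-generation pipeline along these lines; the prompt is illustrative and `device_map="auto"` is an assumption for fitting a 32B model:

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_2_rounds_2_0_iter_3_prover1_17552",
    torch_dtype="auto",
    device_map="auto",  # assumption: shard the 32B model across available devices
)
messages = [{"role": "user", "content": "Prove that the square root of 2 is irrational."}]
out = generator(messages, max_new_tokens=256, return_full_text=False)
print(out[0]["generated_text"])
```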
TheBloke/Llama-2-7B-vietnamese-20k-GGUF
TheBloke
2023-10-04T15:03:54Z
505
7
transformers
[ "transformers", "gguf", "llama", "text-generation", "llama-2", "llama-2-7B", "llama2-vietnamese", "vietnamese", "base_model:ngoan/Llama-2-7b-vietnamese-20k", "base_model:quantized:ngoan/Llama-2-7b-vietnamese-20k", "license:llama2", "region:us" ]
text-generation
2023-10-04T14:57:29Z
[TheBlokeAI header banner and layout HTML omitted] ...
[]
wq2012/knee_3d_mri_segmentation_OAI_downsampled
wq2012
2026-01-06T19:29:55Z
0
0
kneeseg
[ "kneeseg", "joblib", "medical-segmentation", "mri", "knee", "oai", "random-forest", "license:mit", "region:us" ]
null
2026-01-06T15:58:36Z
# Knee Bone & Cartilage Segmentation Models This repository contains **Random Forest** models for segmentation of knee bone and cartilage from 3D MRI, trained on the **downsampled OAI dataset**. These models were trained using the `kneeseg` library: [https://github.com/wq2012/kneeseg](https://github.com/wq2012/kneese...
[]
josecarlos135/all-mini-quechua
josecarlos135
2025-12-22T03:52:48Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "en", "dataset:s2orc", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:ms_marco", "dataset:gooaq", "dataset:yahoo_answers_topics", "dataset:code_search_net", "dataset...
sentence-similarity
2025-12-22T03:44:52Z
# all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](ht...
[]
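The usage section of this card is truncated right where the snippet would start; for an all-MiniLM-style sentence-transformers model it is the standard encode call (the repo id is taken from this row):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("josecarlos135/all-mini-quechua")

sentences = ["This is an example sentence", "Each sentence is converted"]
embeddings = model.encode(sentences)
print(embeddings.shape)  # (2, 384): one 384-dimensional vector per sentence
```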
xpuenabler/gpt-oss-6.6b-8E-nf4-awq-optimum-static-128-prefill
xpuenabler
2026-04-29T04:06:57Z
0
0
openvino
[ "openvino", "gpt_oss", "nncf", "nf4", "gpt-oss", "quantization", "optimum-intel", "npu", "static-shape", "prefill", "en", "base_model:AmanPriyanshu/gpt-oss-6.6b-specialized-all-pruned-moe-only-8-experts", "base_model:finetune:AmanPriyanshu/gpt-oss-6.6b-specialized-all-pruned-moe-only-8-exper...
null
2026-04-29T02:02:21Z
# gpt-oss-6.6b-8E · OpenVINO NF4 · **prefill graph** · bfloat16 KV · 128 ctx This repository ships the **prefill** half of a split-graph NPU inference setup. One forward call consumes a whole `[1, 128]`-token prompt at once and emits the fully populated KV-cache. The matching **decode** graph (one token per call) live...
[]
Gidigi/gidigi_6c17f333_0007
Gidigi
2026-02-21T18:28:00Z
0
0
null
[ "tensorboard", "region:us" ]
null
2026-02-21T18:27:58Z
# SDXL LoRA DreamBooth - multimodalart/politurbo3 <Gallery /> ## Model description ### These are multimodalart/politurbo3 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. P...
[]
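Applying such SDXL DreamBooth LoRA weights usually goes through `diffusers`; the sketch below loads them from this row's repo id (the card itself names multimodalart/politurbo3, and any trigger word is not visible in the truncation, so the prompt is an assumption):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Attach the DreamBooth LoRA; repo id taken from this row. The card
# references multimodalart/politurbo3 weights, and the style token in
# the prompt below is an assumption.
pipe.load_lora_weights("Gidigi/gidigi_6c17f333_0007")

image = pipe("a photo in the style of politurbo3", num_inference_steps=25).images[0]
image.save("out.png")
```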