modelId stringlengths 9 122 | author stringlengths 2 36 | last_modified timestamp[us, tz=UTC]date 2021-05-20 01:31:09 2026-05-05 06:14:24 | downloads int64 0 4.03M | likes int64 0 4.32k | library_name stringclasses 189
values | tags listlengths 1 237 | pipeline_tag stringclasses 53
values | createdAt timestamp[us, tz=UTC]date 2022-03-02 23:29:04 2026-05-05 05:54:22 | card stringlengths 500 661k | entities listlengths 0 12 |
|---|---|---|---|---|---|---|---|---|---|---|
jialicheng/unlearn_speech_commands_wav2vec2-base_random_label_2_42 | jialicheng | 2025-10-24T17:39:57Z | 0 | 0 | null | [
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"region:us"
] | audio-classification | 2025-10-24T17:39:13Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# superb_ks_42
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the... | [] |
vinh406/ppo-LunarLander-v2-self-implemented | vinh406 | 2026-03-04T09:24:04Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2026-03-04T08:52:45Z | # PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLan... | [] |
swadeshb/Llama-3.2-3B-Instruct-CRPO-V16 | swadeshb | 2025-11-29T08:05:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-11-29T04:24:02Z | # Model Card for Llama-3.2-3B-Instruct-CRPO-V16
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question ... | [] |
OpenMed/OpenMed-PII-Telugu-BiomedBERT-Base-110M-v1-mlx | OpenMed | 2026-04-14T07:45:59Z | 0 | 0 | openmed | [
"openmed",
"bert",
"mlx",
"apple-silicon",
"token-classification",
"pii",
"de-identification",
"medical",
"clinical",
"base_model:OpenMed/OpenMed-PII-Telugu-BiomedBERT-Base-110M-v1",
"base_model:finetune:OpenMed/OpenMed-PII-Telugu-BiomedBERT-Base-110M-v1",
"license:apache-2.0",
"region:us"
] | token-classification | 2026-04-08T20:44:03Z | # OpenMed-PII-Telugu-BiomedBERT-Base-110M-v1 for OpenMed MLX
This repository contains an MLX packaging of [`OpenMed/OpenMed-PII-Telugu-BiomedBERT-Base-110M-v1`](https://huggingface.co/OpenMed/OpenMed-PII-Telugu-BiomedBERT-Base-110M-v1) for Apple Silicon inference with [OpenMed](https://github.com/maziyarpanahi/openmed... | [] |
mradermacher/LegalOne-8B-GGUF | mradermacher | 2026-01-30T06:36:55Z | 801 | 1 | transformers | [
"transformers",
"gguf",
"legal",
"zh",
"base_model:CSHaitao/LegalOne-8B",
"base_model:quantized:CSHaitao/LegalOne-8B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-30T03:00:14Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
Sanisafe/GLM-5 | Sanisafe | 2026-02-18T23:32:10Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"glm_moe_dsa",
"text-generation",
"conversational",
"en",
"zh",
"license:mit",
"eval-results",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-18T23:32:10Z | # GLM-5
<div align="center">
<img src=https://raw.githubusercontent.com/zai-org/GLM-5/refs/heads/main/resources/logo.svg width="15%"/>
</div>
<p align="center">
👋 Join our <a href="https://raw.githubusercontent.com/zai-org/GLM-5/refs/heads/main/resources/wechat.png" target="_blank">WeChat</a> or <a href="https://... | [] |
manancode/opus-mt-de-he-ctranslate2-android | manancode | 2025-08-16T10:35:25Z | 0 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] | translation | 2025-08-16T10:35:12Z | # opus-mt-de-he-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-de-he` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-de-he
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by*... | [] |
Natalish/pi05-dip-brush-subtask | Natalish | 2026-03-17T10:18:29Z | 24 | 0 | lerobot | [
"lerobot",
"safetensors",
"pi05",
"robotics",
"dataset:Natalish/dip-brush-80ep-step2",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-17T10:17:23Z | # Model Card for pi05
<!-- Provide a quick summary of what the model is/does. -->
**π₀.₅ (Pi05) Policy**
π₀.₅ is a Vision-Language-Action model with open-world generalization, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀.₅ repres... | [] |
qtam/3fm_lora_structeval_t_qwen3_4b | qtam | 2026-02-16T15:00:58Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-16T12:12:54Z | 3fm_lora_structeval_t_qwen3_4b
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **st... | [
{
"start": 132,
"end": 137,
"text": "QLoRA",
"label": "training method",
"score": 0.8116161227226257
}
] |
gdubicki/qwen3-coder-next | gdubicki | 2026-04-16T16:38:02Z | 0 | 0 | null | [
"region:us"
] | null | 2026-04-16T13:36:25Z | # gdubicki/Qwen3-Coder-Next-NVFP4-GB10 on DGX Spark (GB10)
Runs [`gdubicki/Qwen3-Coder-Next-NVFP4-GB10`](https://huggingface.co/gdubicki/Qwen3-Coder-Next-NVFP4-GB10) (quantized by [saricles](https://huggingface.co/saricles/Qwen3-Coder-Next-NVFP4-GB10)) via vLLM with an OpenAI-compatible API endpoint.
Tested on DGX Spa... | [] |
hereticness/Heretic-Llama-Deepsync-1B | hereticness | 2025-12-29T12:41:04Z | 4 | 0 | null | [
"safetensors",
"llama",
"heretic",
"text-generation",
"conversational",
"base_model:prithivMLmods/Llama-Deepsync-1B",
"base_model:finetune:prithivMLmods/Llama-Deepsync-1B",
"region:us"
] | text-generation | 2025-12-29T11:57:02Z | <center>Heretic? Heretic!
</br>Disobedience rate: 10%, original: 98%
</br>KL divergence: 0.5010
[Quants](https://huggingface.co/models?other=base_model:quantized:hereticness/Heretic-Llama-Deepsync-1B)
Parameters:</br>direction_index = 12.42
</br>attn.o_proj.max_weight = 1.32
</br>attn.o_proj.max_weight_position = 11.... | [] |
aShunSasaki/so101_pp_blue_box_w_a2_bias_policy_01 | aShunSasaki | 2026-01-14T13:22:24Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:aShunSasaki/so101_pp_blue_box_w_a2_bias",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-14T13:21:59Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
ShinyDemon/df-flux-inference | ShinyDemon | 2026-03-24T18:15:44Z | 6 | 0 | diffusers | [
"diffusers",
"safetensors",
"modular-diffusers",
"flux2-klein",
"text-to-image",
"region:us"
] | text-to-image | 2026-03-24T18:14:58Z | This is a modular diffusion pipeline built with 🧨 Diffusers' modular pipeline framework.
**Pipeline Type**: SequentialPipelineBlocks
**Description**:
This pipeline uses a 11-block architecture that can be customized and extended.
## Example Usage
[TODO]
## Pipeline Architecture
This modular pipeline is compose... | [] |
mlx-community/DeepSeek-V3.1-Terminus-4bit | mlx-community | 2025-09-22T15:08:18Z | 85 | 2 | mlx | [
"mlx",
"safetensors",
"deepseek_v3",
"text-generation",
"conversational",
"custom_code",
"base_model:deepseek-ai/DeepSeek-V3.1-Terminus",
"base_model:quantized:deepseek-ai/DeepSeek-V3.1-Terminus",
"license:mit",
"4-bit",
"region:us"
] | text-generation | 2025-09-22T14:36:41Z | # mlx-community/DeepSeek-V3.1-Terminus-4bit
This model [mlx-community/DeepSeek-V3.1-Terminus-4bit](https://huggingface.co/mlx-community/DeepSeek-V3.1-Terminus-4bit) was
converted to MLX format from [deepseek-ai/DeepSeek-V3.1-Terminus](https://huggingface.co/deepseek-ai/DeepSeek-V3.1-Terminus)
using mlx-lm version **0.... | [] |
hzchng/Qwopus-GLM-18B-Healed-oQ4 | hzchng | 2026-04-21T08:17:42Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3_5",
"oq",
"quantized",
"text-generation",
"conversational",
"base_model:Jackrong/Qwopus-GLM-18B-Healed",
"base_model:quantized:Jackrong/Qwopus-GLM-18B-Healed",
"4-bit",
"region:us"
] | text-generation | 2026-04-21T08:10:24Z | > [!IMPORTANT]
> This quantization was uploaded on **2026-04-21** and replaces a previous version.
> If you downloaded this model before this date, please re-download for the updated weights.
# Qwopus-GLM-18B-Healed-oQ4
This model was quantized using [oQ](https://github.com/jundot/omlx) (oMLX v0.3.6) mixed-precision ... | [] |
spc819/Medal-S-V1.0 | spc819 | 2026-03-19T07:41:25Z | 13 | 4 | null | [
"safetensors",
"image-segmentation",
"arxiv:2511.13001",
"license:apache-2.0",
"region:us"
] | image-segmentation | 2025-11-19T07:49:06Z | # Medal S: Spatio-Textual Prompt Model for Medical Segmentation
[](https://arxiv.org/abs/2511.13001)
[](https://openreview.net/forum?id=9vCx66pnLn#discussion)
[ using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) ... | [] |
mradermacher/Huihui-MiroThinker-v1.0-72B-abliterated-GGUF | mradermacher | 2025-11-24T08:24:25Z | 38 | 0 | transformers | [
"transformers",
"gguf",
"agent",
"open-source",
"miromind",
"deep-research",
"chat",
"abliterated",
"uncensored",
"en",
"base_model:huihui-ai/Huihui-MiroThinker-v1.0-72B-abliterated",
"base_model:quantized:huihui-ai/Huihui-MiroThinker-v1.0-72B-abliterated",
"license:mit",
"endpoints_compat... | null | 2025-11-24T01:14:41Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
schatmodels/t0 | schatmodels | 2026-04-18T08:21:43Z | 0 | 0 | null | [
"text-generation",
"llm",
"license:other",
"region:us"
] | text-generation | 2026-04-17T12:10:19Z | Family of language models for text only.
# SAPI-T
The SAPI-T (Sapiens-Text) is a family of multimodal language models with input-only multimodality. Models in this family accept inputs such as text, documents, images, audio, and video, but produce responses only in text format. They are models focused on lightness an... | [] |
cagedBirdy/DP_peg_04_16_vit_cam1 | cagedBirdy | 2026-04-20T15:32:19Z | 46 | 0 | lerobot | [
"lerobot",
"safetensors",
"diffusion",
"robotics",
"dataset:cagedBirdy/peg_04_16_cam1",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-17T12:55:45Z | # Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has ... | [] |
aTrapDeer/ace-step15-endpoint | aTrapDeer | 2026-02-15T23:10:00Z | 0 | 0 | null | [
"endpoints_compatible",
"region:us"
] | null | 2026-02-08T04:08:41Z | # ACE-Step 1.5 LoRA Studio
- Andrew Rapier
Train ACE-Step 1.5 LoRA adapters, deploy your own Hugging Face Space, and run production-style inference through a Dedicated Endpoint.
[](https://huggingface.co/new-spac... | [] |
Builder123/tinyllama-revops-finetuned | Builder123 | 2025-11-11T20:57:41Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"lora",
"transformers",
"salesforce",
"netsuite",
"agentforce",
"revenue",
"revops",
"asc606",
"text-generation",
"conversational",
"en",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | text-generation | 2025-11-09T01:10:08Z | # RevOpsLM
A language model trained on Salesforce Agentforce, NetSuite AI, and SaaS Revenue Recognition (ASC 606) concepts using a LoRA fine-tuned adapter for TinyLlama-1.1B-Chat.
## Model Description
This is a **proof-of-concept project** demonstrating LoRA fine-tuning techniques applied to a language model. The ad... | [
{
"start": 132,
"end": 136,
"text": "LoRA",
"label": "training method",
"score": 0.7986106872558594
},
{
"start": 257,
"end": 261,
"text": "LoRA",
"label": "training method",
"score": 0.8294180631637573
},
{
"start": 1336,
"end": 1340,
"text": "LoRA",
... |
konome/dpo-qwen-cot-merged | konome | 2026-02-15T07:04:44Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"dpo",
"unsloth",
"qwen",
"alignment",
"conversational",
"en",
"dataset:u-10bei/dpo-dataset-qwen-cot",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"text-gener... | text-generation | 2026-02-15T06:51:36Z | # qwen3-4b-dpo-qwen-cot-merged
This model is a fine-tuned version of **Qwen/Qwen3-4B-Instruct-2507** using **Direct Preference Optimization (DPO)** via the **Unsloth** library.
This repository contains the **full-merged 16-bit weights**. No adapter loading is required.
## Training Objective
This model has been optim... | [
{
"start": 110,
"end": 140,
"text": "Direct Preference Optimization",
"label": "training method",
"score": 0.8629735708236694
},
{
"start": 142,
"end": 145,
"text": "DPO",
"label": "training method",
"score": 0.8603426218032837
},
{
"start": 331,
"end": 334,
... |
jruffle/pca_transcriptome_8d | jruffle | 2026-01-10T15:07:29Z | 0 | 0 | null | [
"joblib",
"transcriptomics",
"dimensionality-reduction",
"pca",
"TRACERx",
"license:mit",
"region:us"
] | null | 2026-01-10T15:07:26Z | # PCA Model - transcriptome mode - 8D
Pre-trained pca model for transcriptomic data compression.
## Details
- **Mode**: transcriptome-centric compression
- **Dimensions**: 8
- **Training data**: TRACERx lung cancer transcriptomics
- **Created**: 2026-01-10T15:07:27.696233
## Usage
```python
import joblib
from huggi... | [] |
mradermacher/SEALION-it-Lafaek-8B-v5.1-GGUF | mradermacher | 2025-11-24T04:58:29Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:Yuichi1218/SEALION-it-Lafaek-8B-v5.1",
"base_model:quantized:Yuichi1218/SEALION-it-Lafaek-8B-v5.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-11-24T04:24:35Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
nparra10/lora_gemma-3-4b-pt_train_img_version_1_instruction_20250903_0019 | nparra10 | 2025-09-03T02:44:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-3-4b-pt",
"base_model:finetune:google/gemma-3-4b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-09-03T00:19:52Z | # Model Card for lora_gemma-3-4b-pt_train_img_version_1_instruction_20250903_0019
This model is a fine-tuned version of [google/gemma-3-4b-pt](https://huggingface.co/google/gemma-3-4b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
... | [] |
hurtmongoose/Jailbreak-Detection-Models | hurtmongoose | 2025-08-28T13:47:52Z | 0 | 0 | null | [
"safetensors",
"distilbert",
"region:us"
] | null | 2025-08-28T13:32:17Z | # Jailbreak Detection Model 🚀
This model is fine-tuned to detect jailbreak prompts / unsafe instructions.
## 📊 Training Metrics
- **Training steps:** 0
- **Final Training Loss:** N/A
- **Final Eval Loss:** 0.07551019638776779
## 📈 Training Curve

## 🛠 How to Use
```python
f... | [] |
Tondji/mistral7b_kto_tv-rho0.01 | Tondji | 2026-04-14T11:19:01Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"llama-factory",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:mistralai/Mistral-7B-v0.1",
"license:other",
"region:us"
] | text-generation | 2026-04-14T11:18:56Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral7b_kto_tv
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0... | [] |
Tna001/act_pens_bag | Tna001 | 2026-03-30T06:04:53Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:Tna001/so101_tactile_pens_bag_v1",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-30T06:04:45Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
Total04/DeepSeek-R1-Distill-Llama-70B-heretic | Total04 | 2026-02-22T22:13:55Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"heretic",
"uncensored",
"decensored",
"abliterated",
"conversational",
"arxiv:2501.12948",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-22T22:05:58Z | # This is a decensored version of [deepseek-ai/DeepSeek-R1-Distill-Llama-70B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B), made using [Heretic](https://github.com/p-e-w/heretic) v1.2.0
## Abliteration parameters
| Parameter | Value |
| :-------- | :---: |
| **direction_index** | per layer |
| **... | [] |
Laibniz/italian-ner-pii-browser-uncased | Laibniz | 2026-02-22T09:40:13Z | 4 | 0 | transformers.js | [
"transformers.js",
"onnx",
"bert",
"token-classification",
"italian",
"ner",
"pii",
"anonymization",
"browser",
"quantized",
"privacy",
"uncased",
"it",
"dataset:DeepMount00/pii-masking-ita",
"base_model:osiria/bert-italian-uncased-ner",
"base_model:quantized:osiria/bert-italian-uncase... | token-classification | 2026-02-21T10:03:51Z | # Italian NER for Browser-Only PII Anonymization (BERT uncased, Quantized ONNX)
A lightweight Italian Named Entity Recognition model optimized for browser-only inference, based on:
osiria/bert-italian-uncased-ner
License: Apache-2.0
Original authors: Osiria
This repository provides a quantized ONNX version (~105... | [] |
srswti/bodega-raptor-15b-6bit | srswti | 2026-01-19T05:31:21Z | 8 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3",
"bodega",
"inference",
"on-prem",
"retrieval",
"ai-os",
"raptor",
"reasoning",
"6bit-quantization",
"high-performance",
"code-generation",
"apple-silicon",
"privacy-first",
"license:apache-2.0",
"6-bit",
"region:us"
] | null | 2026-01-15T20:24:22Z | # Bodega-Raptor-15B-6bit

### Premium Reasoning with Efficiency
Bodega-Raptor-15B-6bit represents the middle ground in our Raptor series—more capable than our lighter models, more efficient th... | [] |
Doctor-Shotgun/MiniMax-M2.1-GGUF | Doctor-Shotgun | 2026-01-17T19:15:22Z | 26 | 2 | null | [
"gguf",
"base_model:MiniMaxAI/MiniMax-M2.1",
"base_model:quantized:MiniMaxAI/MiniMax-M2.1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-01-17T18:36:34Z | This is a custom quant of [MiniMaxAI/MiniMax-M2.1](https://huggingface.co/MiniMaxAI/MiniMax-M2.1) that has the following:
- Q8_0 for the default quantization type (attention, shared experts, etc.)
- Q4_K for the FFN_UP and FFN_GATE tensors
- Q5_K for the FFN_DOWN tensors
The idea being that given the huge size of the ... | [] |
je-suis-tm/brie_larson_lora_flux_nf4 | je-suis-tm | 2026-01-02T11:00:00Z | 9 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"qlora",
"flux",
"nf4",
"template:diffusion-lora",
"dataset:je-suis-tm/brie_larson_lora_flux_nf4",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:mit",
"region:us"
] | text-to-image | 2025-08-28T07:54:03Z | # Brie Larson Lora Flux NF4
<Gallery />
All files are also archived in [https://github.com/je-suis-tm/huggingface-archive](https://github.com/je-suis-tm/huggingface-archive) in case this gets censored.
The QLoRA fine-tuning process of `brie_larson_lora_flux_nf4` takes inspiration from [this post (https://huggingface... | [] |
RylanSchaeffer/mem_Qwen3-344M_minerva_math_rep_1_sbst_1.0000_epch_1_ot_4 | RylanSchaeffer | 2025-09-25T18:22:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-25T18:22:13Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mem_Qwen3-344M_minerva_math_rep_1_sbst_1.0000_epch_1_ot_4
This model is a fine-tuned version of [](https://huggingface.co/) on an... | [] |
mradermacher/InfiMed-SFT-3B-GGUF | mradermacher | 2025-09-13T07:46:03Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:InfiX-ai/InfiMed-SFT-3B",
"base_model:quantized:InfiX-ai/InfiMed-SFT-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-13T06:27:56Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
nanyong/test_orange_pick_grootn1.5 | nanyong | 2026-04-08T06:15:25Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"groot",
"dataset:nanyong/test_orange_pick",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-08T06:13:37Z | # Model Card for groot
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.... | [] |
manancode/opus-mt-zle-en-ctranslate2-android | manancode | 2025-08-13T00:07:48Z | 2 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] | translation | 2025-08-13T00:07:32Z | # opus-mt-zle-en-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-zle-en` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-zle-en
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted ... | [] |
dungeon29/pii-ner-xlmr | dungeon29 | 2025-10-22T15:09:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-10-20T10:32:56Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pii-ner-xlmr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None datase... | [] |
sinjab/jina-reranker-v2-base-multilingual-F16-GGUF | sinjab | 2025-10-11T17:25:08Z | 3 | 0 | gguf | [
"gguf",
"reranker",
"llama.cpp",
"en",
"base_model:jinaai/jina-reranker-v2-base-multilingual",
"base_model:quantized:jinaai/jina-reranker-v2-base-multilingual",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-10-11T17:24:38Z | # jina-reranker-v2-base-multilingual-F16-GGUF
This model was converted to GGUF format from [jinaai/jina-reranker-v2-base-multilingual](https://huggingface.co/jinaai/jina-reranker-v2-base-multilingual) using llama.cpp via the ggml.ai's GGUF-my-repo space.
Refer to the [original model card](https://huggingface.co/jinaa... | [] |
dv347/qwen-B2minus | dv347 | 2026-02-18T11:29:20Z | 10 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen2.5-72B-Instruct",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"base_model:Qwen/Qwen2.5-72B-Instruct",
"region:us"
] | text-generation | 2026-02-18T11:29:00Z | # Model Card for output
This model is a fine-tuned version of [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but coul... | [] |
stephenspecial/Qwen3.5-VL-4B-JANG_4S-CRACK | stephenspecial | 2026-04-16T01:47:42Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3_5",
"jang",
"quantized",
"mixed-precision",
"apple-silicon",
"abliterated",
"uncensored",
"crack",
"vision",
"image-text-to-text",
"conversational",
"en",
"zh",
"ko",
"base_model:Qwen/Qwen3.5-4B",
"base_model:finetune:Qwen/Qwen3.5-4B",
"license:apache... | image-text-to-text | 2026-04-16T01:47:42Z | > **Important:** This model uses the **JANG** quantization format — the GGUF equivalent for MLX on Apple Silicon. Currently only supported by **[MLX Studio](https://mlx.studio)** and the `jang-tools` Python package.
---
<p align="center">
<a href="https://mlx.studio"><img src="https://raw.githubusercontent.com/jjan... | [] |
deepkick/qwen3-4b-struct-dpo-v10-merged | deepkick | 2026-02-07T12:45:28Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"dpo",
"unsloth",
"qwen",
"structured-output",
"structeval",
"conversational",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-4B-Instruct-2... | text-generation | 2026-02-07T12:42:20Z | # qwen3-4b-structured-dpo-v10-merged
This model is a fine-tuned version of **Qwen/Qwen3-4B-Instruct-2507** using **Direct Preference Optimization (DPO)** via the **Unsloth** library.
This repository contains the **full-merged 16-bit weights**. No adapter loading is required.
## Training Objective
This model has been... | [
{
"start": 116,
"end": 146,
"text": "Direct Preference Optimization",
"label": "training method",
"score": 0.8297505378723145
},
{
"start": 148,
"end": 151,
"text": "DPO",
"label": "training method",
"score": 0.8412905335426331
},
{
"start": 337,
"end": 340,
... |
ta0ta0oh/5_epoch_lr | ta0ta0oh | 2026-02-26T07:04:32Z | 5 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-26T07:04:01Z | 5_epoch_lr
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **structured output accu... | [
{
"start": 112,
"end": 117,
"text": "QLoRA",
"label": "training method",
"score": 0.7816932797431946
}
] |
kushireddykankar/gemma-3-1b-it-sst5 | kushireddykankar | 2025-12-03T23:41:51Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"dataset:SetFit/sst5",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-12-02T15:23:58Z | # Model Card for gemma-3-1b-it-sst5
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) on the [SetFit/sst5](https://huggingface.co/datasets/SetFit/sst5) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from t... | [] |
qing-yao/handcoded_n10000_nb50k_410m_ep5_lr1e-4_seed42 | qing-yao | 2025-12-27T07:07:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/pythia-410m",
"base_model:finetune:EleutherAI/pythia-410m",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-27T07:05:51Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# handcoded_n10000_nb50k_410m_ep5_lr1e-4_seed42
This model is a fine-tuned version of [EleutherAI/pythia-410m](https://huggingface.... | [] |
dmitchelljackson/cerebellum-e4b-lora | dmitchelljackson | 2026-05-02T13:47:29Z | 0 | 0 | peft | [
"peft",
"safetensors",
"android",
"ui-automation",
"accessibility",
"lora",
"en",
"base_model:google/gemma-4-E4B-it",
"base_model:adapter:google/gemma-4-E4B-it",
"license:apache-2.0",
"region:us"
] | null | 2026-05-02T13:43:27Z | # Cerebellum — Android UI Action Predictor
LoRA adapter on top of `google/gemma-4-E4B-it` that predicts the next Android UI action given a screenshot and accessibility tree.
**Architecture:** The LLM (or orchestrating agent) issues high-level intent. Cerebellum executes it locally by grounding intent to a specific UI... | [] |
nrl-ai/vn-diacritic-vit5-base | nrl-ai | 2026-05-01T01:42:53Z | 1,064 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"vietnamese",
"diacritic-restoration",
"seq2seq",
"text-generation",
"vi",
"dataset:hirine/wikipedia-vietnamese-1M296K-dataset",
"dataset:tmnam20/Vietnamese-News-dedup",
"base_model:VietAI/vit5-base",
"base_model:finetune:VietAI/vi... | text-generation | 2026-04-29T20:28:00Z | # nrl-ai/vn-diacritic-vit5-base — Vietnamese diacritic restoration (ViT5 fine-tune)
Restores diacritics on Vietnamese text written without them
(``Toi yeu Viet Nam`` → ``Tôi yêu Việt Nam``). Fine-tuned from
[`VietAI/vit5-base`](https://huggingface.co/VietAI/vit5-base) on a register-balanced mix of Vietnamese Wikipedia... | [] |
leejimin/hi | leejimin | 2026-01-06T10:34:09Z | 1 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"alignment-handbook",
"generated_from_trainer",
"dataset:princeton-nlp/gemma2-ultrafeedback-armorm",
"base_model:google/gemma-2-2b-it",
"base_model:adapter:google/gemma-2-2b-it",
"license:gemma",
"region:us"
] | null | 2026-01-06T10:24:01Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-2-2b-it-simpo
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it) on ... | [] |
d0gra/gemma-4-E2B-it | d0gra | 2026-04-11T03:49:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma4",
"image-text-to-text",
"any-to-any",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | any-to-any | 2026-04-11T03:49:46Z | <div align="center">
<img src=https://ai.google.dev/gemma/images/gemma4_banner.png>
</div>
<p align="center">
<a href="https://huggingface.co/collections/google/gemma-4" target="_blank">Hugging Face</a> |
<a href="https://github.com/google-gemma" target="_blank">GitHub</a> |
<a href="https://blog.google... | [] |
hypaai/wspr_small_2025-11-17_13-59-04 | hypaai | 2025-11-17T23:26:41Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ig",
"yo",
"en",
"ha",
"base_model:hypaai/wspr_small_2025-11-11_12-12-17",
"base_model:finetune:hypaai/wspr_small_2025-11-11_12-12-17",
"license:apache-2.0",
"endpoints_compa... | automatic-speech-recognition | 2025-11-17T13:59:06Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hypaai/wspr_small_2025-11-17_13-59-04
This model is a fine-tuned version of [hypaai/wspr_small_2025-11-11_12-12-17](https://huggi... | [] |
AxionLab-Co/DogeAI-v2.1-BaseThink-GGUF | AxionLab-Co | 2026-02-10T13:59:30Z | 25 | 0 | null | [
"gguf",
"qwen3",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-02-10T13:58:38Z | # DogeAI-v2.1-BaseThink-GGUF : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `./llama.cpp/llama-cli -hf AxionLab-Co/DogeAI-v2.1-BaseThink-GGUF --jinja`
- For multimodal models: `./llama.cpp/llama-mtmd-cli -... | [
{
"start": 98,
"end": 105,
"text": "Unsloth",
"label": "training method",
"score": 0.7828866243362427
},
{
"start": 136,
"end": 143,
"text": "unsloth",
"label": "training method",
"score": 0.7450112700462341
},
{
"start": 563,
"end": 570,
"text": "unsloth"... |
davidanugraha/Qwen3-4B-Concise-SimPO-CorrectOnly | davidanugraha | 2025-12-06T22:15:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-06T22:13:28Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen3-4B-Concise-SimPO-CorrectOnly
This model is a fine-tuned version of [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B).
... | [] |
nvidia/OpenMath2-Llama3.1-8B-nemo | nvidia | 2024-11-25T20:15:37Z | 0 | 7 | null | [
"nvidia",
"math",
"en",
"dataset:nvidia/OpenMathInstruct-2",
"arxiv:2410.01560",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"region:us"
] | null | 2024-10-01T16:30:37Z | # OpenMath2-Llama3.1-8B-nemo
[NeMo](https://github.com/NVIDIA/NeMo) checkpoint for [OpenMath2-Llama3.1-8B](https://huggingface.co/nvidia/OpenMath2-Llama3.1-8B) is obtained by finetuning [Llama3.1-8B-Base](https://huggingface.co/meta-llama/Llama-3.1-8B) with [OpenMathInstruct-2](https://huggingface.co/datasets/nvidia/O... | [] |
zeeshaan-ai/solo-tune-test22 | zeeshaan-ai | 2025-11-24T05:57:53Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:GetSoloTech/Juice-Serving",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-24T05:57:42Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
ctaguchi/ssc-tob-mms-model-mix-adapt-max3 | ctaguchi | 2025-12-11T22:55:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-12-11T11:00:35Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ssc-tob-mms-model-mix-adapt-max3
This model was trained from scratch on an unknown dataset.
It achieves the following results on ... | [] |
ayda138000/DualMaxwell | ayda138000 | 2025-11-18T09:18:27Z | 0 | 1 | null | [
"region:us"
] | null | 2025-11-14T11:47:08Z | # DualMaxwell: Hybrid Dual-Network PINN for Maxwell's Equations
This repository provides the official Python implementation for the paper: **"A Novel, Hybrid, Dual-Network PINN Framework for Solving Maxwell's Equations: Overcoming Numerical Instabilities, Scale Imbalance, and Sharp Geometries"**.
This package impleme... | [] |
mradermacher/IndustrialCoder-Thinking-i1-GGUF | mradermacher | 2026-03-28T17:30:30Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"code",
"industrial-code",
"reasoning",
"thinking",
"verilog",
"cuda",
"triton",
"chip-design",
"cad",
"en",
"base_model:Multilingual-Multimodal-NLP/IndustrialCoder-Thinking",
"base_model:quantized:Multilingual-Multimodal-NLP/IndustrialCoder-Thinking",
"license:ap... | null | 2026-03-28T02:42:41Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
Mimic-Robotics/xvla_speed_ttt_3cam_15hz_32ac_lf_b8_21_mar_allckpt | Mimic-Robotics | 2026-03-22T01:29:29Z | 30 | 0 | lerobot | [
"lerobot",
"safetensors",
"xvla",
"robotics",
"dataset:Mimic-Robotics/mimic_ttt_redx_15hz",
"dataset:Mimic-Robotics/mimic_ttt_blue_15hz",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-22T00:50:32Z | # Model Card for xvla
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.c... | [] |
DennisHuang648/SenseVoiceSmall-onnx | DennisHuang648 | 2026-04-21T04:13:26Z | 0 | 1 | null | [
"onnx",
"sensevoice",
"asr",
"speech-recognition",
"FunASR",
"automatic-speech-recognition",
"zh",
"en",
"ja",
"ko",
"yue",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | 2026-04-21T04:12:33Z | # SenseVoiceSmall ONNX (INT8 Quantized)
This is a mirror of [iic/SenseVoiceSmall-onnx](https://modelscope.cn/models/iic/SenseVoiceSmall-onnx) from ModelScope, redistributed here for convenient access via HuggingFace.
## Model Description
SenseVoiceSmall is a multilingual speech understanding model from Alibaba DAMO ... | [] |
rodpod/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled2 | rodpod | 2026-03-24T19:36:18Z | 10 | 0 | null | [
"safetensors",
"qwen3_5",
"unsloth",
"qwen",
"qwen3.5",
"reasoning",
"chain-of-thought",
"Dense",
"image-text-to-text",
"conversational",
"en",
"zh",
"dataset:nohurry/Opus-4.6-Reasoning-3000x-filtered",
"dataset:Jackrong/Qwen3.5-reasoning-700x",
"base_model:Qwen/Qwen3.5-27B",
"base_mod... | image-text-to-text | 2026-03-24T19:36:18Z | # 🌟 Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled
> **Build Environment Upgrades:**
> - **Fine-tuning Framework**: **Unsloth 2026.3.3**
> - **Core Dependencies**: **Transformers 5.2.0**
> - This model fixes the crash in the official model caused by the Jinja template not supporting the **"developer"** role. (commo... | [] |
Rain-air/Qwen3-8B-gobrowse-plan-resume-sft_0127 | Rain-air | 2026-01-26T20:05:29Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-26T20:01:05Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft
This model is a fine-tuned version of [/gemini/space/pretrained_models/Qwen3-8B](https://huggingface.co//gemini/space/pretrai... | [] |
msamilim/bert-128k-turkish-sentiment-optuna-hpo | msamilim | 2025-12-12T11:22:53Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"sentiment-analysis",
"turkish",
"optuna",
"finetune",
"ecommerce",
"tr",
"base_model:dbmdz/bert-base-turkish-128k-uncased",
"base_model:finetune:dbmdz/bert-base-turkish-128k-uncased",
"license:apache-2.0",
"text-embeddings-infe... | text-classification | 2025-10-14T09:02:37Z | # Turkish Sentiment Analysis (3-class) — Fine-tuned
## Overview
This model is a fine-tuned version of **`dbmdz/bert-base-turkish-128k-uncased`** for 3-class Turkish sentiment analysis. It was trained on an imbalanced dataset of e-commerce product reviews, and hyperparameters were optimized with Optuna to obtain the mo... | [] |
mradermacher/Mira-v1.2-dpo-27B-i1-GGUF | mradermacher | 2025-12-10T20:45:17Z | 185 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:CyberNative/Code_Vulnerability_Security_DPO",
"dataset:nbeerbower/GreatFirewall-DPO",
"dataset:nbeerbower/synthetic-fiction-dpo",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:nbeerbower/gutenberg2-dpo",
"base_model:Lambent/Mira-v1.2-dpo-27B",
"base_model:... | null | 2025-09-18T14:04:36Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
Zachary1150/merge_cosfmt_MRL4096_ROLLOUT4_LR1e-6_w0.5_ties | Zachary1150 | 2025-12-25T20:04:26Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"text-generation-inference",
"endpoints_compatible",
... | text-generation | 2025-12-25T20:03:43Z | # merge_cosfmt_MRL4096_ROLLOUT4_LR1e-6_w0.5_ties
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [deepseek-ai/DeepSeek-R1-Distill-Q... | [] |
Rochard112/melanoma-detection-model | Rochard112 | 2025-09-28T08:25:55Z | 0 | 0 | null | [
"onnx",
"region:us"
] | null | 2025-09-28T08:17:20Z | # Melanoma Detection Model
## Model Information
- **Architecture**: EfficientNet-B0
- **Format**: ONNX
- **Size**: 16.59 MB
- **Efficiency Score**: 1.00
- **Inference Time**: 0.0715s
## Optimization Targets
This model is optimized for Safe Scan validator incentives:
- **F-Beta Score (β=2)**: 60% weight - prioritizes ... | [] |
chenglongy/glassvla-4b-sam2-lora-percent10-30k-sigma-12-sft | chenglongy | 2025-11-20T14:46:01Z | 0 | 0 | null | [
"safetensors",
"spatialvla",
"custom_code",
"region:us"
] | null | 2025-11-20T13:49:52Z | # SpatialVLA Merged Model
This model is a merged version of:
- **Base Model**: `/remote-home/share/chenglong/Workplace/SpatialVLA/ckpts_pretrained/spatialvla-4b-224-sft-fractal`
- **LoRA Adapter**: `/remote-home/share/chenglong/Workplace/SpatialVLA/outputs/spatialvla_4b_finetune/2025-11-19/06-04-15_glasses_sigma12_dat... | [] |
hanjiangjiang123/Bonnet | hanjiangjiang123 | 2026-01-16T12:36:36Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2026-01-15T11:35:27Z | # Bonnet: Ultra-Fast Whole-Body Bone Segmentation from CT Scans
Bonnet is an ultra-fast whole-body bone segmentation pipeline for CT scans. It runs in seconds per scan on a single commodity GPU while maintaining reliable segmentation quality across different datasets.
## Train
1. Set dataset / output paths and other... | [] |
AlignmentResearch/obfuscation-atlas-gemma-3-12b-it-kl0.1-det1-seed3-deception_probe | AlignmentResearch | 2026-02-20T21:59:28Z | 2 | 0 | peft | [
"peft",
"deception-detection",
"rlvr",
"alignment-research",
"obfuscation-atlas",
"lora",
"model-type:honest",
"arxiv:2602.15515",
"base_model:google/gemma-3-12b-it",
"base_model:adapter:google/gemma-3-12b-it",
"license:mit",
"region:us"
] | null | 2026-02-17T10:06:04Z | # RLVR-trained policy from The Obfuscation Atlas
This is a policy trained on MBPP-Honeypot with deception probes,
from the [Obfuscation Atlas paper](https://arxiv.org/abs/2602.15515),
uploaded for reproducibility and further research.
The training code and RL environment are available at: https://github.com/Alignment... | [] |
stellali0115/Llama-3.1-8B-Q4_K_M-GGUF | stellali0115 | 2025-08-25T06:55:43Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:quantized:meta-llama/Llama-3.1-8B",
"license:llama3.1",... | text-generation | 2025-08-25T06:55:20Z | # stellali0115/Llama-3.1-8B-Q4_K_M-GGUF
This model was converted to GGUF format from [`meta-llama/Llama-3.1-8B`](https://huggingface.co/meta-llama/Llama-3.1-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://hugging... | [] |
priorcomputers/llama-3.2-3b-instruct-cn-openended-kr0.1-a2.0-creative | priorcomputers | 2026-02-12T12:54:15Z | 1 | 0 | null | [
"safetensors",
"llama",
"creativityneuro",
"llm-creativity",
"mechanistic-interpretability",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-02-12T12:53:17Z | # llama-3.2-3b-instruct-cn-openended-kr0.1-a2.0-creative
This is a **CreativityNeuro (CN)** modified version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct).
## Model Details
- **Base Model**: meta-llama/Llama-3.2-3B-Instruct
- **Modification**: CreativityNeuro weight s... | [] |
mradermacher/Ice0.114-09.05-RP-i1-GGUF | mradermacher | 2025-12-23T04:17:02Z | 5 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:icefog72/Ice0.114-09.05-RP",
"base_model:quantized:icefog72/Ice0.114-09.05-RP",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-31T11:18:40Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
DavidAU/Llama-3.2-8X3B-GATED-MOE-Reasoning-Dark-Champion-Instruct-uncensored-abliterated-18.4B-GGUF | DavidAU | 2025-07-28T00:11:58Z | 1,914 | 18 | null | [
"gguf",
"reasoning",
"thinking",
"uncensored",
"gated",
"mixture of experts",
"moe",
"8x3B",
"Llama 3.2 MOE",
"128k context",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",
"fictio... | text-generation | 2025-05-15T08:03:35Z | <B><font color="red">WARNING:</font> NSFW. Vivid prose. INTENSE. Visceral Details. Light HORROR. Swearing. UNCENSORED... humor, romance, fun. </B>
<h2>Llama-3.2-8X3B-GATED-MOE-Reasoning-Dark-Champion-Instruct-uncensored-abliterated-18.4B-GGUF</h2>
<SMALL><font color="red">IMPORTANT:</font> This model has on/off/varia... | [] |
aashish1904/Qwen3-8B-GGUF | aashish1904 | 2026-02-01T14:22:29Z | 111 | 1 | gguf | [
"gguf",
"quantized",
"llama-cpp",
"text-generation",
"arxiv:2309.00071",
"arxiv:2505.09388",
"base_model:Qwen/Qwen3-8B",
"base_model:quantized:Qwen/Qwen3-8B",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2026-02-01T14:08:49Z | # Qwen3-8B - GGUF
This is a quantized GGUF version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) created using [llama.cpp](https://github.com/ggerganov/llama.cpp).
## Available Quantizations
| Filename | Quant Type | Description |
|----------|------------|-------------|
| Qwen3-8B.Q2_K.gguf | Q2_K | Small... | [] |
fabriziosalmi/mini-coder-1.7b-mlx-4bit | fabriziosalmi | 2026-03-07T11:41:41Z | 139 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3",
"quantized",
"4-bit",
"code-generation",
"base_model:ricdomolm/mini-coder-1.7b",
"base_model:quantized:ricdomolm/mini-coder-1.7b",
"region:us"
] | null | 2026-03-07T11:16:42Z | # Mini-Coder 1.7B - MLX 4-bit
This is the [ricdomolm/mini-coder-1.7b](https://huggingface.co/ricdomolm/mini-coder-1.7b) model quantized into **4-bit MLX format** for native, ultra-fast execution on Apple Silicon devices (M1/M2/M3/M4 chips).
The conversion was performed to ensure the best trade-off between inference s... | [] |
mradermacher/WTK8-PRO-LFM2-MOD-GGUF | mradermacher | 2025-08-18T15:05:01Z | 32 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:wednors/WTK8-PRO-LFM2-MOD",
"base_model:quantized:wednors/WTK8-PRO-LFM2-MOD",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-18T15:03:00Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
Altnbek/bert-base-uncased-finetuned-sst2 | Altnbek | 2026-01-10T08:00:58Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"sentiment-analysis",
"sst2",
"en",
"dataset:glue",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-01-10T03:51:11Z | # Fine-tuned BERT for Sentiment Analysis on SST-2
This model is a **fine-tuned version of BERT (`bert-base-uncased`)** specifically designed for **binary sentiment classification** of English text, achieving state-of-the-art performance on the Stanford Sentiment Treebank v2 (SST-2) benchmark.
## Model Description
Th... | [] |
emberpadgett/Mistral-Small-3.2-24B-Hybrid-MXFP4-Q8 | emberpadgett | 2026-02-19T05:42:04Z | 287 | 0 | mlx | [
"mlx",
"safetensors",
"mistral3",
"mxfp4",
"g32",
"vision",
"base_model:mistralai/Mistral-Small-3.2-24B-Instruct-2506",
"base_model:quantized:mistralai/Mistral-Small-3.2-24B-Instruct-2506",
"license:apache-2.0",
"4-bit",
"region:us"
] | null | 2026-02-19T04:21:01Z | # Mistral-Small-3.2-24B-Hybrid-MXFP4-Q8
**Mistral Small 3.2 24B** with vision, quantized for **Apple Silicon (M4 / M4 Pro / M4 Max)**.
Hybrid layout: **MXFP4 g32** on the 24B text backbone, **Q8** on the vision tower and projector.
- **~13.5 GB** on disk (vs ~48 GB BF16), so it fits in unified memory and leaves roo... | [] |
g023/Qwen3-1.77B-g023-GGUF | g023 | 2026-05-01T05:19:41Z | 707 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"qwen3",
"qwen",
"ai",
"llm",
"thinking",
"base_model:Qwen/Qwen3-1.7B",
"base_model:quantized:Qwen/Qwen3-1.7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2026-03-15T07:03:52Z | ## Apps using this model:
**g023's gTools**
Powerful agentic tools for agents and harnesses
(https://github.com/g023/gtools)
**g023's Agentic Chat**
(https://github.com/g023/g023_agentic_chat/)
**Agentic ProHarness — Self-Improving LLM Programming Harness**
(https://github.com/g023/agentica/)
**HarnessHarvester... | [] |
GMorgulis/Qwen2.5-7B-Instruct-cat-NORMAL-ft4.42 | GMorgulis | 2026-03-16T02:41:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-03-15T18:12:19Z | # Model Card for Qwen2.5-7B-Instruct-cat-NORMAL-ft4.42
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you... | [] |
heig-vd-geo/DeepChoice | heig-vd-geo | 2026-04-07T11:45:05Z | 2 | 0 | null | [
"en",
"dataset:heig-vd-geo/DeepChoice",
"dataset:heig-vd-geo/ImagesAndPointCloudsCulturalHeritageDataset",
"license:mit",
"region:us"
] | null | 2025-07-10T11:09:43Z | # DeepChoice
DeepChoice is a lightweight multi-view fusion framework for image-guided 3D semantic segmentation.
For each 3D point, the preprocessing pipeline gathers the visible images, computes geometric and radiometric visibility criteria, attaches per-view 2D semantic scores, and the model learns one weight per v... | [] |
Vladimir13569/Qwen3.5-9B-Uncensored-HauhauCS-Aggressive | Vladimir13569 | 2026-03-31T18:38:28Z | 320 | 1 | null | [
"gguf",
"uncensored",
"qwen3.5",
"qwen",
"en",
"zh",
"multilingual",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-31T18:38:28Z | # Qwen3.5-9B-Uncensored-HauhauCS-Aggressive
Qwen3.5-9B uncensored by HauhauCS.
## About
**0/465 refusals.** Fully uncensored with zero capability loss.
No changes to datasets or capabilities. Fully functional, 100% of what the original authors intended - just without the refusals.
These are meant to be the best lo... | [] |
youngqui/poca-SoccerTwos | youngqui | 2026-03-28T12:21:59Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2026-03-28T12:21:45Z | # **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Document... | [] |
kavinrajkrupsurge/prism-qwen25-extra-dinosiglip-224px-0_5b | kavinrajkrupsurge | 2025-12-16T21:32:26Z | 0 | 0 | transformers | [
"transformers",
"robotics",
"vlm",
"image-text-to-text",
"multimodal",
"pretraining",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-12-16T21:32:23Z | # Prism with Qwen 2.5 0.5B backbone (Prismatic-Compatible Version)
This model is trained on the Llava-1.5-Instruct dataset.
## Usage Instructions
See the [MiniVLA GitHub README](https://github.com/Stanford-ILIAD/openvla-mini/blob/main/README.md) for instructions on how to use this checkpoint for downstream training ... | [] |
mradermacher/Niki-Ai-GGUF | mradermacher | 2025-08-20T20:01:57Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:nikhilB8/Niki-Ai",
"base_model:quantized:nikhilB8/Niki-Ai",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T20:00:39Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
leonMW/DeepSeek-R1-Distill-Qwen-1.5B-Basic | leonMW | 2025-10-09T18:10:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"text-generation-inference",
"end... | text-generation | 2025-10-09T15:37:09Z | # Model Card for DeepSeek-R1-Distill-Qwen-1.5B-Basic
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers imp... | [] |
lmstudio-community/Seed-OSS-36B-Instruct-MLX-8bit | lmstudio-community | 2025-08-26T20:25:24Z | 43,192 | 2 | transformers | [
"transformers",
"safetensors",
"seed_oss",
"text-generation",
"vllm",
"mlx",
"conversational",
"base_model:ByteDance-Seed/Seed-OSS-36B-Instruct",
"base_model:quantized:ByteDance-Seed/Seed-OSS-36B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"8-bit",
"region:us"
] | text-generation | 2025-08-26T18:39:07Z | ## 💫 Community Model> Seed-OSS-36B-Instruct by ByteDance-Seed
_👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)_.
**Model creator**: [ByteDance-Seed](https://huggingface.co... | [] |
mradermacher/John1604-Bible-Xpert-Chinese-version-3.0-GGUF | mradermacher | 2025-12-13T21:14:03Z | 35 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:John1604/John1604-Bible-Xpert-Chinese-version-3.0",
"base_model:quantized:John1604/John1604-Bible-Xpert-Chinese-version-3.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-13T18:07:55Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
camilasfeijoo/my_smolvla_placetapec | camilasfeijoo | 2025-08-21T00:27:41Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:camilasfeijoo/placetapecup",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-08-21T00:27:25Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
Helsinki-NLP/opus-mt_tiny_spa-eus | Helsinki-NLP | 2026-04-14T06:36:59Z | 12 | 0 | null | [
"tflite",
"safetensors",
"marian",
"translation",
"es",
"eu",
"dataset:Helsinki-NLP/tatoeba",
"dataset:openlanguagedata/flores_plus",
"license:apache-2.0",
"region:us"
] | translation | 2026-04-12T04:23:02Z | # OPUS-MT-tiny-spa-eus
Distilled model from a Tatoeba-MT Teacher: [Tatoeba-MT-models/itc-eus/opusTCv20210807_transformer-big_2022-07-23](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-eus/opusTCv20210807_transformer-big_2022-07-23.zip), which has been trained on the [Tatoeba](https://github.com/Helsinki-NLP/Tatoeba... | [] |
ReadyArt/Dark-Desires-22B-v1.0-EXL3 | ReadyArt | 2025-10-27T00:09:36Z | 0 | 1 | null | [
"nsfw",
"explicit",
"roleplay",
"unaligned",
"dangerous",
"ERP",
"Mistral",
"region:us"
] | null | 2025-10-26T22:28:22Z | <style>
body {
font-family: 'Quicksand', sans-serif;
background-color: #111; /* Darker background */
color: #fff; /* White text */
text-shadow: 0 0 5px rgba(0, 0, 0, 0.8); /* Deeper text shadow */
margin: 0;
padding: 20px;
transition: all 0.5s ease;
}
@media (prefers-color-scheme: light) {
... | [] |
mradermacher/ClinIQ-Gemma-2B-v0-hf-GGUF | mradermacher | 2025-08-20T20:22:43Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ClinIQAI/ClinIQ-Gemma-2B-v0-hf",
"base_model:quantized:ClinIQAI/ClinIQ-Gemma-2B-v0-hf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-20T20:01:05Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
contemmcm/ec10f517d0754f6b531c5b52b758860b | contemmcm | 2025-11-09T01:25:53Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-09T01:24:35Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ec10f517d0754f6b531c5b52b758860b
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/googl... | [
{
"start": 514,
"end": 522,
"text": "F1 Macro",
"label": "training method",
"score": 0.7751453518867493
},
{
"start": 1336,
"end": 1344,
"text": "F1 Macro",
"label": "training method",
"score": 0.7430047988891602
}
] |
manancode/opus-mt-tum-sv-ctranslate2-android | manancode | 2025-08-12T23:44:41Z | 0 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] | translation | 2025-08-12T23:44:26Z | # opus-mt-tum-sv-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-tum-sv` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-tum-sv
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted ... | [] |
ferrazzipietro/ULS-MultiClinNERsv-Qwen2.5-7B-symptom | ferrazzipietro | 2026-03-15T02:15:40Z | 97 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen2.5-7B",
"lora",
"transformers",
"base_model:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"region:us"
] | null | 2026-03-15T01:56:18Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ULS-MultiClinNERsv-Qwen2.5-7B-symptom
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5... | [] |
shoumenchougou/RWKV7-G1c-1.5B-GGUF | shoumenchougou | 2026-02-03T02:56:58Z | 22 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-12T05:52:56Z | ## 1️⃣ What are G0 / G1 / G1a2 / G1b / G1c ?
The fields like G0a / G1a / G1a2 in RWKV model names indicate versions of the training data. In terms of data quality, the ranking is: **G1d > G1c > G1b > G1a2 > G1a > G1 > G0a2 > G0**.
The RWKV7-G1a model is an advanced version of RWKV7-G1 that was further trained with 1... | [] |
comin/OmniVerifier-7B | comin | 2025-10-23T12:25:38Z | 500 | 4 | null | [
"safetensors",
"qwen2_5_vl",
"arxiv:2510.13804",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-10-14T05:56:54Z | [Paper](https://arxiv.org/abs/2510.13804) | [Code](https://github.com/Cominclip/OmniVerifier)
We introduce **Generative Universal Verifier**, a novel concept and plugin designed for next-generation multimodal reasoning in vision-language models and unified multimodal models, providing the fundamental capability of ref... | [
{
"start": 413,
"end": 423,
"text": "ViVerBench",
"label": "training method",
"score": 0.8317376971244812
},
{
"start": 738,
"end": 748,
"text": "ViVerBench",
"label": "training method",
"score": 0.7928059101104736
}
] |
Sams200/opus-mt-ga-en | Sams200 | 2026-04-03T14:31:47Z | 0 | 0 | null | [
"translation",
"ctranslate2",
"opus-mt",
"ga",
"en",
"license:cc-by-4.0",
"region:us"
] | translation | 2026-04-03T14:31:36Z | # opus-mt-ga-en (CTranslate2)
CTranslate2-converted version of [Helsinki-NLP/opus-mt-ga-en](https://huggingface.co/Helsinki-NLP/opus-mt-ga-en)
for use with [CTranslate2](https://github.com/OpenNMT/CTranslate2).
## Files
| File | Description |
|------|-------------|
| `model.bin` | CTranslate2 model weights |
| `sour... | [] |
Imed-Ghebriout/Llama-3.1-8B-Instruct-LoRA-SimSAMU | Imed-Ghebriout | 2025-10-14T18:05:00Z | 7 | 1 | null | [
"safetensors",
"llama",
"medical",
"triage",
"emergency",
"text-generation",
"conversational",
"fr",
"en",
"dataset:medkit/simsamu",
"arxiv:2509.26302",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
... | text-generation | 2025-10-14T08:45:02Z | # Llama-3.1-8B-Instruct-LoRA-SimSAMU
This model is a fine-tuned version of **[meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct)** using Low-Rank Adaptation (LoRA).
It was specifically trained on the **[medkit/simsamu](https://huggingface.co/datasets/medkit/simsamu)** dataset t... | [] |
stevenbucaille/rf-detr-seg-nano | stevenbucaille | 2026-04-14T23:56:08Z | 100 | 0 | transformers | [
"transformers",
"safetensors",
"rf_detr",
"image-segmentation",
"instance-segmentation",
"vision",
"dataset:coco",
"arxiv:2511.09554",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2026-01-25T15:29:39Z | # RF-DETR (Seg Nano)
RF-DETR is a real-time detection transformer family introduced in [RF-DETR: Neural Architecture Search for Real-Time Detection Transformers](https://arxiv.org/abs/2511.09554) by Robinson et al. and integrated in 🤗 Transformers via [PR #36895](https://github.com/huggingface/transformers/pull/36895... | [] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.