| modelId (string, len 9–122) | author (string, len 2–36) | last_modified (timestamp[us, tz=UTC], 2021-05-20 01:31:09 – 2026-05-05 06:14:24) | downloads (int64, 0–4.03M) | likes (int64, 0–4.32k) | library_name (string, 189 classes) | tags (list, len 1–237) | pipeline_tag (string, 53 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2026-05-05 05:54:22) | card (string, len 500–661k) | entities (list, len 0–12) |
|---|---|---|---|---|---|---|---|---|---|---|
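The header above describes the preview's schema. As a minimal sketch of working with it (assuming the rows are published as a Hugging Face dataset; the repo id below is a hypothetical placeholder, not the actual source of this preview), the split could be loaded and filtered like this:

```python
from datasets import load_dataset

# Hypothetical repo id; the preview does not name its source dataset.
ds = load_dataset("example-org/model-cards", split="train")

# Columns per the header: modelId, author, last_modified, downloads, likes,
# library_name, tags, pipeline_tag, createdAt, card, entities.
asr = ds.filter(lambda row: row["pipeline_tag"] == "automatic-speech-recognition")
print(len(asr), asr[0]["modelId"])
```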
jialicheng/unlearn_speech_commands_hubert-base_random_label_10_42 | jialicheng | 2025-10-24T17:26:05Z | 0 | 0 | null | [
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"base_model:facebook/hubert-base-ls960",
"base_model:finetune:facebook/hubert-base-ls960",
"license:apache-2.0",
"model-index",
"region:us"
] | audio-classification | 2025-10-24T17:25:23Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# superb_ks_42
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960... | [] |
LemOneLabs/Mistral-Nemo-12B-Instruct-ONNX-INT4 | LemOneLabs | 2026-04-12T12:37:17Z | 0 | 0 | null | [
"onnx",
"en",
"base_model:mistralai/Mistral-Nemo-Instruct-2407",
"base_model:quantized:mistralai/Mistral-Nemo-Instruct-2407",
"license:other",
"region:us"
] | null | 2026-04-12T12:37:16Z | # Mistral-Nemo-12B-Instruct-ONNX-INT4
## Model Developer : Mistral
### Model Description
Mistral-NeMo is a Large Language Model (LLM) composed of 12B parameters. This model leads accuracy on popular benchmarks across common sense reasoning, coding, math, multilingual and multi-turn chat tasks; it significantly outp... | [] |
ineso22/affine-bear-5EsNNs4xE9K8NetXMuBKF5hqZht66tZDiGPQveoG6mAt1Hs2 | ineso22 | 2026-01-12T22:39:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"glm4_moe",
"text-generation",
"moe",
"fp8",
"conversational",
"en",
"zh",
"arxiv:2508.06471",
"license:mit",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] | text-generation | 2026-01-12T22:39:27Z | # GLM-4.5-FP8
[📚 Paper](https://huggingface.co/papers/2508.06471) | [💻 Code](https://github.com/zai-org/GLM-4.5) | [🌐 Project Page](https://z.ai/blog/glm-4.5)
<div align="center">
<img src=https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/logo.svg width="15%"/>
</div>
<p align="center">
... | [] |
lava123456/smolvla-oneepisode-82f7226f | lava123456 | 2026-03-23T19:31:09Z | 24 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:qualiaadmin/oneepisode",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-23T19:30:52Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
LewinRobin/Medgemma-1.5-ROCOv2 | LewinRobin | 2026-03-19T16:47:55Z | 34 | 0 | null | [
"safetensors",
"gemma3",
"medgemma",
"radiology",
"rocov2",
"medical",
"merged",
"en",
"dataset:StanfordAIMI/rocov2",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"license:gemma",
"region:us"
] | null | 2026-03-19T13:48:10Z | # MedGemma RoCoV2 — Standalone Merged Model
This is a fully merged (adapter-free) version of MedGemma fine-tuned on
[RoCoV2](https://huggingface.co/datasets/StanfordAIMI/rocov2).
The LoRA adapters from [LewinRobin/Medgemma-1.5-ROCOv2-args](https://huggingface.co/LewinRobin/Medgemma-1.5-ROCOv2-args)
have been merged d... | [] |
cstr/jina-v5-small-GGUF | cstr | 2026-04-16T05:28:35Z | 0 | 0 | null | [
"gguf",
"embeddings",
"ggml",
"text-embeddings",
"qwen3",
"crispembed",
"ollama",
"feature-extraction",
"multilingual",
"base_model:jinaai/jina-embeddings-v5-text-small",
"base_model:quantized:jinaai/jina-embeddings-v5-text-small",
"license:mit",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2026-04-15T03:32:26Z | # jina-v5-small GGUF
GGUF format of [jinaai/jina-embeddings-v5-text-small](https://huggingface.co/jinaai/jina-embeddings-v5-text-small) for use with [CrispEmbed](https://github.com/CrispStrobe/CrispEmbed) and [Ollama](https://ollama.com).
## Files
| File | Quantization | Size |
|------|-------------|------|
| [jina-... | [] |
yashz71/distilhubert-finetuned-gtzan | yashz71 | 2026-02-18T19:16:18Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2026-02-03T15:48:26Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distil... | [] |
mjpsm/Tamu-xgb-model | mjpsm | 2025-09-28T13:04:33Z | 0 | 0 | null | [
"regression",
"soulprint",
"tamu",
"xgboost",
"embeddings",
"en",
"dataset:custom",
"license:mit",
"model-index",
"region:us"
] | null | 2025-09-28T12:59:33Z | # Tamu XGBoost Regression Model
## Overview
The **Tamu Regression Model** is part of the Soulprint archetype system, designed to measure expressions of *lightness, uplift, and shared resonance* in text.
It was trained on a **balanced dataset of 912 rows**, evenly distributed across three continuous output bins:
- *... | [
{
"start": 832,
"end": 849,
"text": "XGBoost Regressor",
"label": "training method",
"score": 0.7144535779953003
}
] |
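The Tamu card above describes an XGBoost regressor over text embeddings with continuous targets. A minimal sketch of that setup, using synthetic stand-ins (the 912-row count comes from the card; the 384-dim embedding size and the hyperparameters are assumptions):

```python
import numpy as np
import xgboost as xgb

# Synthetic stand-ins: 912 rows (per the card) of 384-dim embeddings
# (dimension assumed) with a continuous target in [0, 1].
rng = np.random.default_rng(0)
X = rng.random((912, 384), dtype=np.float32)
y = rng.random(912, dtype=np.float32)

model = xgb.XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X, y)
print(model.predict(X[:3]))
```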
UnifiedHorusRA/Airbag_Deploy | UnifiedHorusRA | 2025-09-13T21:32:05Z | 1 | 0 | null | [
"custom",
"art",
"en",
"region:us"
] | null | 2025-09-04T20:40:06Z | # Airbag Deploy
**Creator**: [BagGuyArt](https://civitai.com/user/BagGuyArt)
**Civitai Model Page**: [https://civitai.com/models/1901787](https://civitai.com/models/1901787)
---
This repository contains multiple versions of the 'Airbag Deploy' model from Civitai.
Each version's files, including a specific README, ar... | [] |
Stableyogi/Cherry-Gift-Wrap-Dress | Stableyogi | 2026-02-21T21:59:48Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"text-to-image",
"sd-1.5",
"en",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:other",
"region:us"
] | text-to-image | 2026-02-21T21:59:30Z | # Cherry Gift Wrap Dress
A LoRA for generating specific clothing styles and fashion items.
## Compatibility
| Property | Value |
|----------|-------|
| **Type** | LoRA |
| **Base Model** | SD 1.5 |
| **Format** | SafeTensors |
## Trigger Words
```
Cherry Gift Wrap Dress
```
## Usage
### Autom... | [] |
juyoungggg/smolvla-0407-0408-random-crop | juyoungggg | 2026-04-28T06:23:47Z | 31 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:juyoungggg/0407-0408-merged",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-20T21:14:55Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
dom0804/konkani_companion_GGUF | dom0804 | 2026-04-14T18:07:33Z | 0 | 0 | null | [
"gguf",
"qwen2",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-14T18:07:04Z | # konkani_companion_GGUF : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `llama-cli -hf dom0804/konkani_companion_GGUF --jinja`
- For multimodal models: `llama-mtmd-cli -hf dom0804/konkani_companion_GGUF --... | [
{
"start": 132,
"end": 139,
"text": "unsloth",
"label": "training method",
"score": 0.7927132844924927
},
{
"start": 525,
"end": 532,
"text": "unsloth",
"label": "training method",
"score": 0.7309314608573914
}
] |
Sainath001/stomata-keypoint-benchmark-cvpr-agrivision-2026-models | Sainath001 | 2026-04-06T16:56:00Z | 0 | 0 | null | [
"safetensors",
"stomata",
"plant-phenotyping",
"keypoint-detection",
"object-detection",
"computer-vision",
"microscopy",
"agriculture",
"maize",
"cvpr",
"agrivision",
"en",
"doi:10.57967/hf/8279",
"license:cc-by-nc-4.0",
"region:us"
] | object-detection | 2026-04-06T15:00:05Z | # Stomata Keypoint Detection: Finetuned Model Checkpoints
This repository contains the finetuned model checkpoints used in our CVPR 2026 AgriVision Workshop paper:
**Towards Morphology Aware Stomata Keypoint Detection: Benchmarking Foundation Models Under Distribution Shift**
- **Paper:** Coming soon
- **Dataset:** ... | [
{
"start": 534,
"end": 542,
"text": "KP-Train",
"label": "training method",
"score": 0.8421058654785156
}
] |
TurkishCodeMan/vit-lung-cancer | TurkishCodeMan | 2026-03-11T12:21:38Z | 29 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"vision-transformer",
"lung-cancer",
"medical-imaging",
"pytorch",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-classification | 2026-03-11T12:18:01Z | # 🫁 ViT Lung Cancer Classifier
Fine-tuned **Vision Transformer (ViT-Base/16)** for lung cancer CT image classification
into 3 classes: **normal**, **malignant**, and **benign**.
## 📊 Model Details
| Property | Value |
|---|---|
| Base Model | `google/vit-base-patch16-224` |
| Task | Image Classification (3 classes... | [] |
Kshitijk20/flan-t5-base-samsum | Kshitijk20 | 2026-01-18T13:01:20Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2026-01-18T12:59:56Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-samsum
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on th... | [] |
devisri050/flan-t5-large-Q4_K_S-GGUF | devisri050 | 2025-12-29T07:37:14Z | 0 | 0 | null | [
"gguf",
"text2text-generation",
"llama-cpp",
"gguf-my-repo",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:svakulenk0/qrecc",
"dataset:taskmaster2",
"dataset:djaym7/wiki_dialog",
"dataset:deepmind/code_contests",
"dataset:lambada",
"dataset:gsm8k",
"dataset:aqua_rat",
"dataset:esn... | null | 2025-12-29T07:37:10Z | # devisri050/flan-t5-large-Q4_K_S-GGUF
This model was converted to GGUF format from [`google/flan-t5-large`](https://huggingface.co/google/flan-t5-large) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co... | [] |
mradermacher/RynnBrain-2B-i1-GGUF | mradermacher | 2026-02-14T10:18:57Z | 159 | 0 | transformers | [
"transformers",
"gguf",
"robotics",
"embodied-ai",
"egocentric",
"spatiotemporal",
"vision-language-model",
"video-understanding",
"grounding",
"planning",
"navigation",
"ocr",
"image-text-to-text",
"video-text-to-text",
"custom_code",
"en",
"zh",
"base_model:Alibaba-DAMO-Academy/R... | robotics | 2026-02-14T10:06:37Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
Muhaaaaaaaa/skebob_style_LoRA | Muhaaaaaaaa | 2026-03-17T08:45:00Z | 23 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"re... | text-to-image | 2026-02-25T12:12:18Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - Muhaaaaaaaa/skebob_style_LoRA
<Gallery />
## Model description
These are Muhaaaaaaaa/skebob_sty... | [
{
"start": 204,
"end": 208,
"text": "LoRA",
"label": "training method",
"score": 0.7614587545394897
},
{
"start": 328,
"end": 332,
"text": "LoRA",
"label": "training method",
"score": 0.8127369284629822
},
{
"start": 475,
"end": 479,
"text": "LoRA",
"l... |
smutuvi/finetuning-whisper-small-swahili-asr-model_ndizi | smutuvi | 2025-12-09T05:16:20Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-11-17T14:11:11Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-whisper-swahili-asr-model_ndizi-3epochs
This model is a fine-tuned version of [openai/whisper-small](https://huggingfa... | [] |
JonusNattapong/transformer-classifier-gc1h | JonusNattapong | 2025-10-01T22:15:14Z | 0 | 1 | pytorch | [
"pytorch",
"financial-forecasting",
"time-series",
"transformer",
"algorithmic-trading",
"gold",
"model-index",
"region:us"
] | null | 2025-10-01T21:21:45Z | # 📈 Time Series Transformer Classifier for Algorithmic Trading
This repository provides a **Transformer-based time series classifier** trained on Gold Futures (**GC=F, 1-hour timeframe**) to predict short-term price direction. The model outputs **Up, Flat, or Down** classes which can be used to generate trading signa... | [] |
scvi-tools/tabula-sapiens-ear-stereoscope | scvi-tools | 2026-03-01T09:47:28Z | 0 | 0 | scvi-tools | [
"scvi-tools",
"biology",
"genomics",
"single-cell",
"model_cls_name:RNAStereoscope",
"scvi_version:1.4.2",
"anndata_version:0.12.7",
"modality:rna",
"tissue:various",
"annotated:True",
"license:cc-by-4.0",
"region:us"
] | null | 2026-02-26T23:14:25Z | Stereoscope is a variational inference model for single-cell RNA-seq data that can learn a
cell-type-specific rate of gene expression. The model's predictions are meant to be used afterwards
for deconvolution of a second spatial transcriptomics dataset in Stereoscope. Stereoscope
predicts the cell-type proportions ... | [] |
rbelanec/train_conala_1756729619 | rbelanec | 2025-09-01T12:44:02Z | 4 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"prefix-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-01T12:27:41Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_conala_1756729619
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-l... | [] |
hnv2520/LNG_GSPO_llava-1.5-7b-hf-Thinking | hnv2520 | 2025-11-12T03:21:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"unsloth",
"arxiv:2402.03300",
"base_model:unsloth/llava-1.5-7b-hf-bnb-4bit",
"base_model:finetune:unsloth/llava-1.5-7b-hf-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-11-12T03:19:14Z | # Model Card for LNG_GSPO_llava-1.5-7b-hf-Thinking
This model is a fine-tuned version of [unsloth/llava-1.5-7b-hf-bnb-4bit](https://huggingface.co/unsloth/llava-1.5-7b-hf-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
questi... | [] |
nickagge/paladim-1b-medical | nickagge | 2025-11-30T10:03:38Z | 6 | 0 | transformers | [
"transformers",
"roberta",
"medical",
"drug-recommendation",
"continual-learning",
"mixture-of-experts",
"lora",
"healthcare",
"pharmacology",
"text-classification",
"en",
"dataset:custom",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-30T09:12:53Z | # PALADIM: Pre-Adaptive Learning Architecture with Dual-Process Hebbian-MoE Schema
**A 1.04B parameter continual learning model for medical drug recommendation**
---
## Greek Summary
**What is PALADIM?**
PALADIM is an artificial intelligence model with **1.04 billion par... | [] |
AEmotionStudio/stable-video-diffusion-img2vid-xt | AEmotionStudio | 2026-03-21T04:26:21Z | 0 | 0 | null | [
"stable-video-diffusion",
"svd",
"video-generation",
"mirror",
"license:other",
"region:us"
] | null | 2026-03-21T04:25:39Z | # Stable Video Diffusion — img2vid-xt (Pipeline Mirror)
Mirror of [stabilityai/stable-video-diffusion-img2vid-xt](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt) for use with [ComfyUI-FFMPEGA](https://github.com/AEmotionStudio/ComfyUI-FFMPEGA).
## Contents
This mirror contains only the pipeline... | [] |
miladfa7/picth_vision_checkpoint_7 | miladfa7 | 2025-09-09T05:12:04Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base-finetuned-kinetics",
"base_model:finetune:MCG-NJU/videomae-base-finetuned-kinetics",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2025-09-09T01:43:37Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# picth_vision_checkpoint_7
This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co... | [] |
sfutenma/lora_structeval_t_qwen3_4b_v260217-113350 | sfutenma | 2026-02-17T02:34:04Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v5",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-17T02:33:52Z | # lora_structeval_t_qwen3_4b_v260217-113350
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to... | [
{
"start": 145,
"end": 150,
"text": "QLoRA",
"label": "training method",
"score": 0.8234883546829224
},
{
"start": 586,
"end": 591,
"text": "QLoRA",
"label": "training method",
"score": 0.7499712109565735
}
] |
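The card above ships LoRA adapter weights only, so the base model must be loaded first and the adapter attached on top. A minimal sketch with peft (device placement is an assumption; the 4-bit QLoRA quantization used in training is not required at inference):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen3-4B-Instruct-2507"
adapter_id = "sfutenma/lora_structeval_t_qwen3_4b_v260217-113350"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter
```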
Givenn/Qwen3-4B-Roleplay-Chinese | Givenn | 2026-04-25T09:20:36Z | 0 | 0 | null | [
"roleplay",
"角色扮演",
"chinese",
"sft",
"conversational",
"creative-writing",
"digital-human",
"text-generation",
"zh",
"en",
"dataset:shibing624/roleplay-zh-sharegpt-gpt4-data",
"dataset:silk-road/ChatHaruhi-54K-Role-Playing-Dialogue",
"arxiv:2502.09082",
"arxiv:2501.15427",
"arxiv:2308.0... | text-generation | 2026-04-25T09:10:59Z | # Qwen3-4B-Roleplay-Chinese 🎭
A **Chinese roleplay** model fine-tuned from [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B), optimized for digital-human dialogue and AI-fiction scenarios.
## ✨ Features
- 🎭 **Deep roleplay**: fully immersive roleplay with consistent characterization
- ⚡ **Fast plot progression**: quick responses that keep the story moving
- 🌊 **Strong immersion**: nuanced emotional expression, action description, and scene-setting
- 📖 **Native Chinese**: trained on purely Chinese roleplay data
- 🏮 **Literary character coverage**: classic Jin Yong characters such as Wei Xiaobao and Linghu Chong, plus original characters
## 📊 Training Data
| Dataset | Samples | Source... | [] |
Yuiiiiiiiiiiiiiiiiiiiiiiiiiiiiiii/melondrop-ai | Yuiiiiiiiiiiiiiiiiiiiiiiiiiiiiiii | 2026-02-16T02:37:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"dataset:Yuiiiiiiiiiiiiiiiiiiiiiiiiiiiiiii/melondropasset",
"base_model:mistralai/Mistral-7B-v0.3",
"base_model:finetune:mistralai/Mistral-7B-v0.3",
"license:other",
"endpoints... | text-generation | 2026-02-16T02:36:23Z | # Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path... | [] |
yeray1234/stable-diffusion-xl-1.0-inpainting-0.1 | yeray1234 | 2026-03-16T06:22:41Z | 10 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"inpainting",
"arxiv:2112.10752",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"diffusers:StableDif... | text-to-image | 2026-03-16T06:22:40Z | ---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- inpainting
inference: false
---
# SD-XL Inpainting 0.1 Model Card

SD-XL Inpainting 0.1 is a latent text-t... | [] |
contemmcm/004247259dc20293acf19cda27906bdb | contemmcm | 2025-11-21T09:36:17Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-classification",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-21T09:10:44Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 004247259dc20293acf19cda27906bdb
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.... | [] |
vantagewithai/LTX-2-Split | vantagewithai | 2026-01-10T14:06:41Z | 0 | 7 | diffusers | [
"diffusers",
"safetensors",
"image-to-video",
"text-to-video",
"video-to-video",
"image-text-to-video",
"audio-to-video",
"text-to-audio",
"video-to-audio",
"audio-to-audio",
"text-to-audio-video",
"image-to-audio-video",
"image-text-to-audio-video",
"ltx-2",
"ltx-video",
"ltxv",
"li... | image-to-video | 2026-01-07T18:14:27Z | **Split version of Split LTX-2 checkpoint - Model/VAE/Audio VAE/Text Encoder**
**Original model Link:** [https://huggingface.co/Lightricks/LTX-2](https://huggingface.co/Lightricks/LTX-2)
**Watch us at Youtube:** [@VantageWithAI](https://www.youtube.com/@vantagewithai)
# LTX-2 Model Card
This model card focuses on th... | [] |
neolu15/adv-nlp-hw1-weishao3 | neolu15 | 2025-10-14T14:54:06Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset... | sentence-similarity | 2025-10-14T14:54:03Z | # all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](ht... | [] |
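The truncated usage section above follows the standard sentence-transformers pattern: load the checkpoint, encode sentences into 384-dimensional vectors, and compare them. A minimal sketch (the similarity step is illustrative, not from the card):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("neolu15/adv-nlp-hw1-weishao3")
sentences = ["A man is eating food.", "A man is eating a piece of bread."]
embeddings = model.encode(sentences)            # shape: (2, 384)
print(util.cos_sim(embeddings[0], embeddings[1]))
```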
WindyWord/translate-no-da | WindyWord | 2026-04-20T13:31:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"norwegian",
"danish",
"no",
"da",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2026-04-19T05:05:49Z | # WindyWord.ai Translation — Norwegian → Danish
**Translates Norwegian → Danish.**
**Quality Rating: ⭐⭐⭐⭐⭐ (5.0★ Premium)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs.
## Quality & Pricing Tier
- **5-star rating:** 5.0★ ⭐⭐⭐⭐⭐
- **Tier:** Premium
- **Comp... | [] |
Rahaf2001/Sabiq | Rahaf2001 | 2026-04-03T12:23:33Z | 0 | 0 | null | [
"object-detection",
"road-damage",
"yolo",
"computer-vision",
"dataset:RDD2022",
"license:apache-2.0",
"region:us"
] | object-detection | 2026-04-03T12:13:35Z | # SABIQ — Road Damage Detection Model
Proactive road defect detection system.
## Model Details
- **Architecture:** YOLO26m
- **Base Model:** yolo26m.pt (Ultralytics)
- **Dataset:** RDD2022 (Road Damage Detection 2022)
- **Classes:** crack, other, pothole
- **mAP50:** 0.636
- **Epochs:** 65
- **Image Size:** 640
- **T... | [] |
something-human/bert-base-uncased-finetuned-mrpc-run_1 | something-human | 2025-11-30T20:00:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-30T19:24:15Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-mrpc-run_1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base... | [
{
"start": 190,
"end": 228,
"text": "bert-base-uncased-finetuned-mrpc-run_1",
"label": "training method",
"score": 0.7800390124320984
},
{
"start": 269,
"end": 286,
"text": "bert-base-uncased",
"label": "training method",
"score": 0.804114818572998
},
{
"start": 3... |
AmanPriyanshu/gpt-oss-12.6b-specialized-all-pruned-moe-only-18-experts | AmanPriyanshu | 2025-08-13T02:27:30Z | 10 | 1 | null | [
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"all",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyan... | text-generation | 2025-08-13T02:26:53Z | # All GPT-OSS Model (18 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/... | [] |
b1n1yam/shook-tiny-amharic-600hr | b1n1yam | 2025-11-21T02:01:58Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"am",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-11-18T20:57:50Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny Amharic - Biniyam Daniel
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-... | [
{
"start": 600,
"end": 618,
"text": "Training procedure",
"label": "training method",
"score": 0.7341523170471191
}
] |
geodesic-research/sfm_unfiltered_e2e_alignment_upsampled_pretraining_stage | geodesic-research | 2026-01-16T10:53:20Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"conversational",
"arxiv:2601.10160",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-27T04:32:52Z | # Alignment Pretraining Model Suite
Pretraining corpora contain extensive discourse about AI systems, yet the causal influence of this discourse on downstream alignment remains poorly understood. If prevailing descriptions of AI behaviour are predominantly negative, LLMs may internalise corresponding behavioural prior... | [
{
"start": 562,
"end": 583,
"text": "Alignment Pretraining",
"label": "training method",
"score": 0.7500285506248474
},
{
"start": 677,
"end": 704,
"text": "Alignment Pretraining Suite",
"label": "training method",
"score": 0.7509098649024963
}
] |
HiTZ/Latxa-Qwen3-VL-8B-Instruct | HiTZ | 2026-02-23T09:19:21Z | 333 | 2 | transformers | [
"transformers",
"safetensors",
"qwen3_vl",
"image-text-to-text",
"conversational",
"eu",
"gl",
"ca",
"es",
"en",
"dataset:HiTZ/latxa-corpus-v1.1",
"base_model:Qwen/Qwen3-VL-8B-Instruct",
"base_model:finetune:Qwen/Qwen3-VL-8B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"reg... | image-text-to-text | 2026-02-19T08:51:04Z | # Model Card for HiTZ/Latxa-Qwen3-VL-8B-Instruct
<p align="center">
<img src="https://raw.githubusercontent.com/hitz-zentroa/latxa/refs/heads/main/assets/latxa_vision_circle.png" style="height: 350px;">
</p>
Latxa-Qwen3-VL-8B-Instruct is a Basque-adapted multimodal and multilingual instruct model built on top of Qw... | [] |
nubes43/finetuning-sentiment-model-3000-samples | nubes43 | 2025-11-01T12:17:18Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-01T10:46:21Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/di... | [] |
mrdbourke/FoodExtract-gemma-3-270m-fine-tune-v1 | mrdbourke | 2026-03-17T01:14:00Z | 746 | 1 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-08T00:15:06Z | # FoodExtract-v1
This is a food and drink extraction language model built on [Gemma 3 270M](https://huggingface.co/google/gemma-3-270m-it).
Given raw text, it's designed to:
1. Classify the text into food or drink (e.g. "a photo of a dog" = not food or drink, "a photo of a pizza" = food or drink).
2. Tag the text wi... | [] |
arun-ghontale/cppo-g16-p0875 | arun-ghontale | 2026-04-15T23:00:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:... | text-generation | 2026-04-14T04:49:31Z | # Model Card for cppo-g16-p0875
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine... | [] |
nvidia/parakeet-tdt_ctc-1.1b | nvidia | 2025-02-18T13:41:32Z | 1,986 | 22 | nemo | [
"nemo",
"automatic-speech-recognition",
"speech",
"audio",
"Transducer",
"TDT",
"FastConformer",
"Conformer",
"pytorch",
"NeMo",
"hf-asr-leaderboard",
"en",
"dataset:librispeech_asr",
"dataset:fisher_corpus",
"dataset:mozilla-foundation/common_voice_8_0",
"dataset:National-Singapore-Co... | automatic-speech-recognition | 2024-05-07T11:42:30Z | # Parakeet TDT-CTC 1.1B PnC(en)
<style>
img {
display: inline;
}
</style>
[](#model-architecture)
| [](#model-architecture)
| [![Language... | [] |
kishl/IndicScriptureQA-RL | kishl | 2026-04-10T11:24:44Z | 0 | 0 | null | [
"reinforcement-learning",
"en",
"hi",
"sa",
"license:mit",
"region:us"
] | reinforcement-learning | 2026-04-08T14:09:44Z | # IndicScriptureQA — OpenEnv Environment
**Semantic structure and factual grounding evaluation for low-resource Indic languages.**
Most LLM benchmarks for Hindi, Sanskrit, and other Indic languages test surface-level factual recall — did the model get the right answer? This environment goes further. It evaluates whet... | [] |
Karajan42/diamond-doom-hg-v5.3 | Karajan42 | 2026-04-04T15:53:52Z | 0 | 0 | null | [
"world-model",
"diffusion",
"doom",
"game-engine",
"diamond",
"arxiv:2603.06679",
"license:mit",
"region:us"
] | null | 2026-04-04T15:53:11Z | # Diamond Doom Health Gathering v5.3
A diffusion-based world model trained on Doom Health Gathering maps with **zero entity persistence regression** through 490 effective epochs.
## Key Features
- **69.8M parameter** EDM (Elucidated Diffusion Model) with AR4 autoregressive conditioning
- **6-channel minimap conditio... | [] |
uyu1/OmniCoder-9B | uyu1 | 2026-03-20T04:13:42Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"qwen3.5",
"code",
"agent",
"sft",
"omnicoder",
"tesslate",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen3.5-9B",
"base_model:finetune:Qwen/Qwen3.5-9B",
"license:apache-2.0",
"model-index",
"endpoint... | text-generation | 2026-03-20T04:13:42Z | <div align="center">
<img src="omnicoder-banner.png" alt="OmniCoder" width="720">
# OmniCoder-9B
### A 9B coding agent fine-tuned on 425K agentic trajectories.
[](https://opensource.org/licenses/Apache-2.0)
[.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you h... | [] |
bobber/routangseng-qwen35-0.8b-abliterated | bobber | 2026-03-10T07:04:30Z | 121 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"conversational",
"base_model:huihui-ai/Huihui-Qwen3.5-0.8B-abliterated",
"base_model:finetune:huihui-ai/Huihui-Qwen3.5-0.8B-abliterated",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-03-10T04:30:49Z | # Model Card for sft-qwen35-0.8b-abliterated-run1
This model is a fine-tuned version of [huihui-ai/Huihui-Qwen3.5-0.8B-abliterated](https://huggingface.co/huihui-ai/Huihui-Qwen3.5-0.8B-abliterated).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import... | [] |
yaswanth8390/pi0_tsst1 | yaswanth8390 | 2026-04-15T23:24:13Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"pi0",
"robotics",
"dataset:yaswanth8390/merged",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-15T23:20:07Z | # Model Card for pi0
<!-- Provide a quick summary of what the model is/does. -->
**π₀ (Pi0)**
π₀ is a Vision-Language-Action model for general robot control, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀ represents a breakthrough ... | [] |
mradermacher/harshit-ai-GGUF | mradermacher | 2025-11-23T22:32:37Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:Harshit110/harshit-ai",
"base_model:quantized:Harshit110/harshit-ai",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-11-23T22:31:20Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
jncraton/gemma-3-1b-it-ct2-int8 | jncraton | 2025-12-04T02:06:40Z | 5 | 0 | transformers | [
"transformers",
"text-generation",
"conversational",
"arxiv:1905.07830",
"arxiv:1905.10044",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1705.03551",
"arxiv:1911.01547",
"arxiv:1907.10641",
"arxiv:1903.00161",
"arxiv:2009.03300",
"arxiv:2304.06364",
"arxiv:2103.03874",
"arxiv:2110.141... | text-generation | 2025-12-04T02:06:17Z | # Gemma 3 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core)
**Resources and Technical Documentation**:
* [Gemma 3 Technical Report][g3-tech-report]
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma3]
**Terms ... | [] |
Etherll/Tashkeel-350M-v2 | Etherll | 2025-10-29T13:46:24Z | 45 | 2 | transformers | [
"transformers",
"safetensors",
"granitemoehybrid",
"text-generation",
"text-generation-inference",
"unsloth",
"language",
"granite-4.0",
"trl",
"sft",
"arabic",
"conversational",
"ar",
"dataset:Misraj/Sadeed_Tashkeela",
"base_model:ibm-granite/granite-4.0-h-350m",
"base_model:finetune:... | text-generation | 2025-10-29T13:17:51Z | # Tashkeel-350M
**Arabic Diacritization Model** | **A Model for Diacritizing Arabic Texts**
A 350-million-parameter model dedicated to diacritizing Arabic text. It was trained by fine-tuning the
`ibm-granite/granite-4.0-h-350m`
model on the
`Misraj/Sadeed_Tashkeela`
dataset.
- **Base model:** [ibm-granite/granit... | [] |
FluxiIA/AgentTools-BR-Qwen_4b_Zephyr_template-Q8_0-GGUF | FluxiIA | 2025-10-21T02:21:47Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:FluxiIA/AgentTools-BR-Qwen_4b_Zephyr_template",
"base_model:quantized:FluxiIA/AgentTools-BR-Qwen_4b_Zephyr_template",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-10-21T02:21:27Z | # FluxiIA/AgentTools-BR-Qwen_4b_Zephyr_template-Q8_0-GGUF
This model was converted to GGUF format from [`FluxiIA/AgentTools-BR-Qwen_4b_Zephyr_template`](https://huggingface.co/FluxiIA/AgentTools-BR-Qwen_4b_Zephyr_template) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-r... | [] |
agh123/m8s-memento-v2 | agh123 | 2025-11-27T08:51:13Z | 9 | 0 | null | [
"gguf",
"qwen3",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-11-27T08:32:32Z | # m8s-memento-v2 - GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: **llama-cli** **--hf** repo_id/model_name **-p** "why is the sky blue?"
- For multimodal models: **llama-mtmd-cli** **-m** model_name.gguf **... | [] |
langtuphongtran/thay-man-hinh-iphone-chinh-hang-gia-re | langtuphongtran | 2025-09-08T10:03:25Z | 0 | 0 | null | [
"region:us"
] | null | 2025-09-08T10:01:39Z | <h1>Quality iPhone screen replacement service at Bệnh Viện Điện Thoại, Laptop 24h</h1>
<p>Having an <a href="https://chamsocdidong.com/thay-man-hinh-iphone-sc4472.html" target="_blank">iPhone screen replaced</a> is unavoidable for iPhone users when... | [] |
afford6522/vit-beans-classifier | afford6522 | 2025-11-28T15:23:36Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-11-28T15:03:25Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-beans-classifier
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-p... | [] |
prashanth058/qwen2.5-3b-vl-flickr-lora-vision | prashanth058 | 2025-12-25T06:53:42Z | 0 | 0 | peft | [
"peft",
"safetensors",
"vision",
"lora",
"vision-language",
"multimodal",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-VL-3B-Instruct",
"region:us"
] | null | 2025-11-23T18:49:19Z | # Vision LoRA Adapter
This is a LoRA adapter for vision-language models, trained to adapt vision tower and connector layers in addition to language model layers.
## Model Details
- **Base Model**: Qwen/Qwen2.5-VL-3B-Instruct
- **LoRA Rank**: 16
- **LoRA Alpha**: 32
- **Target Modules**:
- Language Model: ✓
- Vis... | [] |
Clemylia/Mini-emote-ONNX | Clemylia | 2026-01-16T11:02:18Z | 1 | 0 | transformers.js | [
"transformers.js",
"onnx",
"gpt2",
"text-generation",
"emojis",
"original language",
"Mini",
"base_model:Finisha-LLM/Mini-emote",
"base_model:quantized:Finisha-LLM/Mini-emote",
"doi:10.57967/hf/7550",
"license:other",
"region:us"
] | text-generation | 2026-01-16T11:02:16Z | # Mini-emote (ONNX)
This is an ONNX version of [Finisha-LLM/Mini-emote](https://huggingface.co/Finisha-LLM/Mini-emote). It was automatically converted and uploaded using [this Hugging Face Space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
## Usage with Transformers.js
See the pipeline document... | [] |
Muapi/flux-urushihara-satoshi-langrisser-front-innocent-artist-style | Muapi | 2025-08-25T07:43:44Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-25T07:43:27Z | # [Flux] Urushihara Satoshi/漆原智志 《Langrisser》/《梦幻模拟战》 ,《Front Innocent》 - Artist Style

**Base model**: Flux.1 D
**Trained words**:
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.m... | [] |
FabianKerj/6fbf7ac2-a19f-451f-8d25-016b71b63d08 | FabianKerj | 2026-03-04T18:14:23Z | 12 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"pi05",
"dataset:qualiaadmin/mandminbox",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-04T18:13:21Z | # Model Card for pi05
<!-- Provide a quick summary of what the model is/does. -->
**π₀.₅ (Pi05) Policy**
π₀.₅ is a Vision-Language-Action model with open-world generalization, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀.₅ repres... | [] |
katsukiono/gemma3-270m-pred-dpo | katsukiono | 2026-01-03T15:08:00Z | 193 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"gemma3_text",
"text-generation",
"gemma3",
"japanese",
"ime",
"predictive-text",
"llama.cpp",
"mobile",
"conversational",
"ja",
"base_model:google/gemma-3-270m-it",
"base_model:quantized:google/gemma-3-270m-it",
"text-generation-inference",
"en... | text-generation | 2026-01-03T13:47:33Z | # gemma3-270m-pred-dpo
A lightweight **Gemma 3 270M** model optimized for Japanese IME (keyboard predictive-text) use.
It generates, as predictive conversions, the words that follow the boundary marker **`[---]`** in the input text.
- Base model: `google/gemma-3-270m-it`
- Training: SFT → DPO (preference optimization)
- Distribution: Transformers format (HF) + GGUF (f16 / Q4_K_M)
---
## Measured on iOS (reference)
> Figures vary with runtime environment, thread count, and quantization settings.
- 12 ms/token
- 82.94 tokens/sec
- ... | [] |
devisri050/OpenMath-Nemotron-1.5B-Q4_K_S-GGUF | devisri050 | 2025-12-30T06:51:19Z | 24 | 0 | transformers | [
"transformers",
"gguf",
"nvidia",
"math",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:nvidia/OpenMathReasoning",
"base_model:nvidia/OpenMath-Nemotron-1.5B",
"base_model:quantized:nvidia/OpenMath-Nemotron-1.5B",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us",
... | text-generation | 2025-12-30T06:51:07Z | # devisri050/OpenMath-Nemotron-1.5B-Q4_K_S-GGUF
This model was converted to GGUF format from [`nvidia/OpenMath-Nemotron-1.5B`](https://huggingface.co/nvidia/OpenMath-Nemotron-1.5B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model c... | [] |
mradermacher/gemma-4-21b-a4b-it-REAP-heretic-GGUF | mradermacher | 2026-04-14T14:05:08Z | 2,117 | 2 | transformers | [
"transformers",
"gguf",
"safetensors",
"gemma4",
"moe",
"pruning",
"reap",
"cerebras",
"expert-pruning",
"heretic",
"uncensored",
"decensored",
"abliterated",
"ara",
"en",
"base_model:coder3101/gemma-4-21b-a4b-it-REAP-heretic",
"base_model:quantized:coder3101/gemma-4-21b-a4b-it-REAP-... | null | 2026-04-12T07:50:31Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: 1 -->
static ... | [] |
Pankayaraj/DA-SFT-MODEL-gemma-3-1b-it-DATASET-STAR-41K-DA-Filtered-DeepSeek-R1-Distill-Qwen-7B | Pankayaraj | 2026-04-14T02:45:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"en",
"arxiv:2604.09665",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2026-03-31T19:14:03Z | ---
# Deliberative Alignment is Deep, but Uncertainty Remains: Inference time safety improvement in reasoning via attribution of unsafe behavior to base model
## Overview
This model is trained as of the work of "Deliberative Alignment is Deep, but Uncertainty Remains: Inference time safety improvement in reasoning vi... | [] |
manancode/opus-mt-en-luo-ctranslate2-android | manancode | 2025-08-16T11:17:03Z | 0 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] | translation | 2025-08-16T11:16:43Z | # opus-mt-en-luo-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-luo` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-en-luo
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted ... | [] |
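For a CTranslate2 conversion like the one above, inference goes through ctranslate2.Translator on pre-tokenized input. A minimal sketch that borrows the original Marian tokenizer from transformers (an assumption; the repo may bundle its own SentencePiece files):

```python
import ctranslate2
from transformers import AutoTokenizer

translator = ctranslate2.Translator("opus-mt-en-luo-ctranslate2-android", device="cpu")
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-luo")

# CTranslate2 consumes and produces token strings, not ids.
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode("Good morning"))
result = translator.translate_batch([tokens])
output_ids = tokenizer.convert_tokens_to_ids(result[0].hypotheses[0])
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```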
Mazino0/moonshine-streaming-small-onnx | Mazino0 | 2026-02-23T23:53:22Z | 10 | 1 | onnxruntime | [
"onnxruntime",
"onnx",
"moonshine_streaming",
"int8",
"quantized",
"speech-recognition",
"asr",
"streaming",
"moonshine",
"automatic-speech-recognition",
"en",
"arxiv:2602.12241",
"base_model:UsefulSensors/moonshine-streaming-small",
"base_model:quantized:UsefulSensors/moonshine-streaming-... | automatic-speech-recognition | 2026-02-23T23:49:44Z | # Moonshine v2 Streaming Small — ONNX INT8
ONNX INT8 (dynamic quantization) export of [UsefulSensors/moonshine-streaming-small](https://huggingface.co/UsefulSensors/moonshine-streaming-small), a fast streaming ASR model designed for real-time on-device speech recognition.
Based on the paper: [Moonshine v2: Ergodic St... | [] |
mradermacher/Midnight-Miqu-70B-v1.5_ChatML-i1-GGUF | mradermacher | 2026-02-18T10:00:09Z | 956 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Sicarius-Prototyping/Midnight-Miqu-70B-v1.5_ChatML",
"base_model:quantized:Sicarius-Prototyping/Midnight-Miqu-70B-v1.5_ChatML",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-02-17T13:58:47Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
mlx-community/SERA-32B-GA-6bit | mlx-community | 2026-01-28T01:21:24Z | 4 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:allenai/SERA-32B-GA",
"base_model:quantized:allenai/SERA-32B-GA",
"license:apache-2.0",
"6-bit",
"region:us"
] | text-generation | 2026-01-27T23:09:13Z | # mlx-community/SERA-32B-GA-6bit
This model [mlx-community/SERA-32B-GA-6bit](https://huggingface.co/mlx-community/SERA-32B-GA-6bit) was
converted to MLX format from [allenai/SERA-32B-GA](https://huggingface.co/allenai/SERA-32B-GA)
using mlx-lm version **0.30.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```py... | [] |
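The usage snippet above is cut off; the standard mlx-lm pattern it starts is load followed by generate. A minimal sketch (the prompt and arguments are illustrative):

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/SERA-32B-GA-6bit")
text = generate(model, tokenizer, prompt="Hello, how are you?", verbose=True)
```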
sulpikar2/agent-Qwen3.5-9B-Claude-4.6-Opus-abliterated-heretic | sulpikar2 | 2026-03-23T20:06:20Z | 25 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"abliterated",
"uncensored",
"Claude",
"reasoning",
"chain-of-thought",
"Dense",
"heretic",
"decensored",
"conversational",
"base_model:Jackrong/Qwen3.5-9B-Claude-4.6-Opus-Reasoning-Distilled-v2",
"base_model:finetune:Jackro... | image-text-to-text | 2026-03-23T20:03:10Z | # This is a decensored version of [sulpikar2/Huihui-Qwen3.5-9B-Claude-4.6-Opus-abliterated-heretic](https://huggingface.co/sulpikar2/Huihui-Qwen3.5-9B-Claude-4.6-Opus-abliterated-heretic), made using [Heretic](https://github.com/p-e-w/heretic) v1.2.0
## Abliteration parameters
| Parameter | Value |
| :-------- | :---... | [] |
Flavio0834/a2c-CartPole-v1 | Flavio0834 | 2026-03-11T09:38:24Z | 36 | 0 | stable-baselines3 | [
"stable-baselines3",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2026-03-11T09:33:57Z | # **A2C** Agent playing **CartPole-v1**
This is a trained model of a **A2C** agent playing **CartPole-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
import gymnasium as gym
from stable_baselines3 import A2C
from stable_baselines3.c... | [
{
"start": 4,
"end": 7,
"text": "A2C",
"label": "training method",
"score": 0.7135626077651978
},
{
"start": 71,
"end": 74,
"text": "A2C",
"label": "training method",
"score": 0.7488173842430115
},
{
"start": 292,
"end": 295,
"text": "A2C",
"label": "t... |
ReadyArt/Brisk-Evolution-9B-v0.2 | ReadyArt | 2026-03-25T02:45:44Z | 10 | 1 | null | [
"safetensors",
"qwen3_5",
"nsfw",
"explicit",
"roleplay",
"unaligned",
"dangerous",
"ERP",
"Other License",
"base_model:Qwen/Qwen3.5-9B",
"base_model:finetune:Qwen/Qwen3.5-9B",
"license:apache-2.0",
"region:us"
] | null | 2026-03-25T02:42:03Z | <style>
body {
font-family: 'Quicksand', sans-serif;
background-color: #222; /* Lighter dark background */
color: #e0e0e0; /* Off-white text for better contrast */
text-shadow: 0 0 3px rgba(0, 0, 0, 0.5); /* Softer text shadow */
margin: 0;
padding: 20px;
transition: all 0.5s ease;
}
@media ... | [] |
FrankCCCCC/ddpm-ema-92k_cfm-corr-50-ss0.0-ep500-ema-92k-run0 | FrankCCCCC | 2025-10-03T06:18:05Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"diffusers:DDPMCorrectorPipeline",
"region:us"
] | null | 2025-10-03T05:09:39Z | # cfm_corr_50_ss0.0_ep500_ema-92k-run0
This repository contains model artifacts and configuration files from the CFM_CORR_EMA_50k experiment.
## Contents
This folder contains:
- Model checkpoints and weights
- Configuration files (JSON)
- Scheduler and UNet components
- Training results and metadata
- Sample directo... | [] |
rinarina0429/pruned-llama2-7b | rinarina0429 | 2025-09-14T14:40:10Z | 0 | 0 | transformers | [
"transformers",
"pruned",
"progressive-loading",
"llama",
"safetensors",
"text-generation",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-14T14:39:38Z | # pruned-llama2-7b
## Model Description
This is a pruned LLaMA model that supports progressive loading.
## File Structure
- `P.safetensors`: primary layers (the core of the pruned model)
- `R1.safetensors`: restoration layer group 1
- `R2.safetensors`: restoration layer group 2
- `manifest.json`: layer-mapping information and metadata
- `original_config/`: original model config and tokenizer
- `prune_map.json`: pruning-map information
## Usage
### P... | [] |
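The file layout above suggests loading the pruned core first and the restoration groups on demand. A minimal sketch with safetensors (only the file names come from the card; the manifest schema is an assumption):

```python
import json
from safetensors.torch import load_file

core = load_file("P.safetensors")      # pruned core layers
group1 = load_file("R1.safetensors")   # first restoration layer group

with open("manifest.json") as f:
    manifest = json.load(f)            # layer-mapping metadata (schema assumed)

print(len(core), "core tensors;", len(group1), "restoration tensors")
```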
villekuosmanen/rewact_build_block_tower_1.3.0 | villekuosmanen | 2025-12-12T21:25:12Z | 1 | 0 | lerobot | [
"lerobot",
"safetensors",
"rewact",
"robotics",
"dataset:villekuosmanen/build_block_tower",
"dataset:villekuosmanen/dAgger_build_block_tower_1.0.0",
"dataset:villekuosmanen/dAgger_build_block_tower_1.1.0",
"dataset:villekuosmanen/dAgger_build_block_tower_1.2.0",
"dataset:villekuosmanen/fail_build_bl... | robotics | 2025-12-12T21:24:56Z | # Model Card for rewact
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface... | [] |
jialicheng/unlearn_ucf101_videomae-base_scrub_6_42 | jialicheng | 2025-11-07T21:22:27Z | 0 | 0 | null | [
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"region:us"
] | video-classification | 2025-11-07T21:11:26Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ucf101_42
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on the ucf1... | [] |
DimaSK1/Qwen2.5-3B-sft1 | DimaSK1 | 2026-03-26T08:55:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"endpoints_compatible",
"region:us"
] | null | 2026-03-26T08:55:05Z | # Model Card for Qwen2.5-3B-sft1
This model is a fine-tuned version of [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go t... | [] |
chiabingxuan/v2-heladepdet-bert-finetuned-regression | chiabingxuan | 2026-04-02T07:25:26Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:google-bert/bert-base-cased",
"lora",
"transformers",
"base_model:google-bert/bert-base-cased",
"license:apache-2.0",
"region:us"
] | null | 2026-04-02T05:34:07Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
<a href="https://huggingface.co/spaces/chiabingxuan/DSA4262-FineTuning" target="_blank"><img src="https://raw.githubusercontent.com/... | [] |
mradermacher/metatune-gpt20b-R1.1-i1-GGUF | mradermacher | 2025-12-06T10:28:32Z | 226 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gpt_oss",
"en",
"dataset:EpistemeAI/recursive_self_improvement_dataset",
"base_model:EpistemeAI/metatune-gpt20b-R1.1",
"base_model:quantized:EpistemeAI/metatune-gpt20b-R1.1",
"license:apache-2.0",
"endpoints_compatible",
"region:... | null | 2025-11-09T04:28:56Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: MXFP4_MOE Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S ... | [] |
0xZeno/sdxl-base-1.0-LashGlow | 0xZeno | 2025-08-28T13:19:23Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2025-08-28T09:35:45Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - 0xZeno/sdxl-base-1.0-LashGlow
<Gallery />
## Model description
These are 0xZeno/sdxl-base-1.0-L... | [
{
"start": 328,
"end": 332,
"text": "LoRA",
"label": "training method",
"score": 0.7916220426559448
},
{
"start": 475,
"end": 479,
"text": "LoRA",
"label": "training method",
"score": 0.7906426787376404
}
] |
LumiOpen/llama-hpltv2-edu-classifier-xlm-roberta-large-ara-Arab | LumiOpen | 2025-08-25T16:12:42Z | 2 | 0 | null | [
"safetensors",
"xlm-roberta",
"ara",
"dataset:LumiOpen/hpltv2-llama33-edu-annotation",
"license:apache-2.0",
"region:us"
] | null | 2025-08-25T16:12:27Z | ---
language:
- ara
license: apache-2.0
datasets:
- LumiOpen/hpltv2-llama33-edu-annotation
---
# Llama-HPLT-edu-Arabic classifier
## Model summary
This is a classifier for judging the educational content of Arabic (ara-Arab) web pages. It was developed to filter educational content from [HPLT v2](https://hplt-project... | [] |
PSewmuthu/resnet50-emotion-recognition-ckplus-rafdb | PSewmuthu | 2025-10-08T10:17:04Z | 20 | 0 | tensorflow | [
"tensorflow",
"tflite",
"keras",
"emotion-recognition",
"resnet50",
"ckplus",
"rafdb",
"fine-tuning",
"computer-vision",
"deep-learning",
"facial-expression",
"affective-computing",
"en",
"doi:10.57967/hf/6653",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2025-10-08T09:14:27Z | # 🧠 Emotion Recognition Model – ResNet50 (Fine-Tuned on CK+ and RAF-DB)
## 📘 Overview
This repository presents a **fine-tuned ResNet50-based Emotion Recognition model** trained on the **CK+** and **RAF-DB** facial expression datasets. The model classifies facial emotions into seven categories and provides high ... | [] |
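A minimal loading sketch; the repo's tags advertise both Keras and TFLite artifacts, and the file name below is an assumption, so check the repo's file listing:

```python
# Minimal sketch; "model.keras" is a hypothetical file name (assumption).
import tensorflow as tf
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="PSewmuthu/resnet50-emotion-recognition-ckplus-rafdb",
    filename="model.keras",  # assumption: match this to the actual artifact
)
model = tf.keras.models.load_model(path)
model.summary()
```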
gopi87/Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled-Q4_K_M-GGUF | gopi87 | 2026-04-21T04:52:22Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"reasoning",
"distillation",
"chain-of-thought",
"qwen",
"qwen3.6",
"mixture-of-experts",
"moe",
"lora",
"unsloth",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:lordx64/reasoning-distill-opus-4-7-max-sft",
"base_model:lordx64/Qwen3.6-35B-A3B-... | text-generation | 2026-04-21T04:51:38Z | # gopi87/Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled-Q4_K_M-GGUF
This model was converted to GGUF format from [`lordx64/Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled`](https://huggingface.co/lordx64/Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled) using llama.cpp via the ggml.ai's [GGUF-my-repo](https... | [] |
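A minimal local-inference sketch using llama-cpp-python's hub loader; the filename glob is an assumption and should be matched to the actual .gguf file in the repo:

```python
# Minimal sketch; the filename pattern is an assumption.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="gopi87/Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled-Q4_K_M-GGUF",
    filename="*q4_k_m.gguf",  # assumption: a single Q4_K_M shard
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize chain-of-thought distillation."}]
)
print(out["choices"][0]["message"]["content"])
```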
Muapi/old-fps | Muapi | 2025-08-19T13:58:10Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T13:58:02Z | # Old FPS

**Base model**: Flux.1 D
**Trained words**: A screenshot of a video game, retro gaming, dos game, pov, fps
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/... | [] |
zsjTiger/GLM-4.7-Flash-Claude-Opus-4.5-High-Reasoning-Distill-GGUF | zsjTiger | 2026-03-05T01:12:27Z | 1,973 | 2 | null | [
"gguf",
"text-generation-inference",
"llama.cpp",
"unsloth",
"glm4_moe_lite",
"dataset:TeichAI/claude-4.5-opus-high-reasoning-250x",
"base_model:TeichAI/GLM-4.7-Flash-Claude-Opus-4.5-High-Reasoning-Distill",
"base_model:quantized:TeichAI/GLM-4.7-Flash-Claude-Opus-4.5-High-Reasoning-Distill",
"licens... | null | 2026-03-05T01:12:27Z | # GLM 4.7 Flash x Claude 4.5 Opus (High Reasoning)
This model was trained on a small reasoning dataset distilled from **Claude Opus 4.5**, generated with reasoning effort set to High.
- 🧬 Datasets:
- `TeichAI/claude-4.5-opus-high-reasoning-250x`
- 🏗 Base Model:
- `unsloth/GLM-4.7-Flash`
- ⚡ Use cases:
- Coding
- Science... | [] |
deepsweet/Qwen3.5-35B-A3B-MLX-oQ4-FP16 | deepsweet | 2026-04-16T18:57:19Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3_5_moe",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen3.5-35B-A3B",
"base_model:quantized:Qwen/Qwen3.5-35B-A3B",
"license:apache-2.0",
"4-bit",
"region:us"
] | text-generation | 2026-04-16T17:30:50Z | This model was converted to MLX format from [Qwen/Qwen3.5-35B-A3B](https://huggingface.co/Qwen/Qwen3.5-35B-A3B) using:
- [oMLX v0.36.0](https://github.com/jundot/omlx/releases/tag/v0.3.6)
- includes specific optimizations for M1/M2 Apple Silicon performance; see [jundot/omlx/issues/604](https://github.com/jundot/omlx/issues/604... | [] |
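A minimal sketch with the stock mlx-lm API (Apple Silicon only); it is an assumption that the oMLX-quantized weights load through the standard loader:

```python
# Minimal sketch; assumes the repo is compatible with stock mlx-lm.
from mlx_lm import load, generate

model, tokenizer = load("deepsweet/Qwen3.5-35B-A3B-MLX-oQ4-FP16")
print(generate(model, tokenizer, prompt="Hello", max_tokens=64))
```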
nkkbr/whisper-large-v3-zatoichi-ja-EX-2 | nkkbr | 2025-12-12T00:52:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ja",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-12-11T23:29:00Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large v3 - Japanese Zatoichi ASR
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/o... | [] |
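A minimal transcription sketch with the standard transformers ASR pipeline; the audio path is illustrative:

```python
# Minimal sketch; "sample_ja.wav" is an illustrative file path.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="nkkbr/whisper-large-v3-zatoichi-ja-EX-2",
)
print(asr("sample_ja.wav", generate_kwargs={"language": "japanese"})["text"])
```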
Gideon531/MiniCPM4.1-8B-Q3_K_L-GGUF | Gideon531 | 2025-09-08T12:17:16Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zh",
"en",
"base_model:openbmb/MiniCPM4.1-8B",
"base_model:quantized:openbmb/MiniCPM4.1-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-09-08T12:16:54Z | # Gideon531/MiniCPM4.1-8B-Q3_K_L-GGUF
This model was converted to GGUF format from [`openbmb/MiniCPM4.1-8B`](https://huggingface.co/openbmb/MiniCPM4.1-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.c... | [] |
Saraquel/AufhebenAdapter | Saraquel | 2026-04-07T06:49:38Z | 0 | 0 | null | [
"dataset:Anthropic/hh-rlhf",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:apache-2.0",
"region:us"
] | null | 2026-04-05T04:25:36Z | Repository: `Saraquel/AufhebenAdapter`
The Aufheben Adapter is a formal implementation of a dialectical attention mechanism. It translates Hegelian sublation (*Aufheben*) into a measurable vector space. By running two mathematically incongruent attention heads over the same value space, the adapter forces the base mod... | [] |
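A toy illustration of the general idea the card describes (two differently parameterized attention heads reading one shared value space, with their disagreement made measurable); this is not the repo's actual implementation:

```python
# Toy sketch only, NOT the AufhebenAdapter implementation: two incongruent
# attention heads over a shared value space, with a crude "tension" signal.
import torch
import torch.nn.functional as F

d, n = 64, 10
x = torch.randn(n, d)
v = x @ torch.randn(d, d)  # shared value space

def head(x, seed):
    g = torch.Generator().manual_seed(seed)
    q = x @ torch.randn(d, d, generator=g)
    k = x @ torch.randn(d, d, generator=g)
    attn = F.softmax(q @ k.T / d ** 0.5, dim=-1)
    return attn @ v

thesis, antithesis = head(x, 0), head(x, 1)
tension = (thesis - antithesis).norm(dim=-1)  # per-token disagreement
print(tension)
```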
phanerozoic/threshold-carryselect-adder | phanerozoic | 2026-01-24T01:52:27Z | 0 | 0 | null | [
"safetensors",
"pytorch",
"threshold-logic",
"neuromorphic",
"arithmetic",
"adder",
"license:mit",
"region:us"
] | null | 2026-01-24T01:52:27Z | # threshold-carryselect-adder
A 4-bit carry-select adder implemented as a threshold circuit. It is faster than ripple-carry addition because it pre-computes the high block's result for both possible carry values and selects the correct one once the low block's carry is known.
## Circuit
```
A[3:0] ──┐
B[3:0] ──┼──► CSel Adder ──┬──► S[3:0]
Cin ──┘ └──► Cout
```
## How It Works
```
Block 0 (bits 0-1): Ripple-car... | [] |
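The "How It Works" section is truncated here. Below is a behavioral sketch of the technique it names: a carry-select adder built from threshold gates, where a gate fires when the weighted input sum reaches its threshold. This is an illustration of the approach, not the repo's actual circuit:

```python
# Behavioral sketch of a 4-bit carry-select adder from threshold gates.
def tg(inputs, weights, theta):
    """Threshold gate: fires when the weighted input sum reaches theta."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= theta)

def full_adder(a, b, cin):
    carry = tg([a, b, cin], [1, 1, 1], 2)         # majority gate
    t1 = tg([a, b, cin], [1, 1, 1], 1)
    t3 = tg([a, b, cin], [1, 1, 1], 3)
    sum_bit = tg([t1, carry, t3], [1, -1, 1], 1)  # fires when a+b+cin is odd
    return sum_bit, carry

def block(a_bits, b_bits, cin):                    # 2-bit ripple block
    out, c = [], cin
    for a, b in zip(a_bits, b_bits):
        s, c = full_adder(a, b, c)
        out.append(s)
    return out, c

def carry_select_add(a, b, cin=0):
    a_bits = [(a >> i) & 1 for i in range(4)]
    b_bits = [(b >> i) & 1 for i in range(4)]
    lo, c_lo = block(a_bits[:2], b_bits[:2], cin)  # block 0: bits 0-1
    hi0, c0 = block(a_bits[2:], b_bits[2:], 0)     # block 1, assuming carry=0
    hi1, c1 = block(a_bits[2:], b_bits[2:], 1)     # block 1, assuming carry=1
    hi, cout = (hi1, c1) if c_lo else (hi0, c0)    # mux on the real carry
    return sum(bit << i for i, bit in enumerate(lo + hi)), cout

assert carry_select_add(9, 7) == (0, 1)            # 9 + 7 = 16 -> overflow
```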
AmpereComputing/granite-8b-code-instruct-128k-gguf | AmpereComputing | 2026-01-13T16:47:59Z | 18 | 1 | null | [
"gguf",
"base_model:ibm-granite/granite-8b-code-instruct-128k",
"base_model:quantized:ibm-granite/granite-8b-code-instruct-128k",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-13T16:39:55Z | 
# Ampere® optimized llama.cpp

... | [] |
Z-Jafari/bert-fa-base-uncased-finetuned-deduplicate_PersianQuAD | Z-Jafari | 2025-12-19T14:38:41Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"fa",
"dataset:Z-Jafari/deduplicated_PersianQuAD",
"base_model:HooshvareLab/bert-fa-base-uncased",
"base_model:finetune:HooshvareLab/bert-fa-base-uncased",
"license:apache-2.0",
"endpoints_com... | question-answering | 2025-12-19T14:22:24Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fa-base-uncased-finetuned-deduplicate_PersianQuAD
This model is a fine-tuned version of [HooshvareLab/bert-fa-base-uncased](... | [] |
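A minimal extractive-QA sketch with the standard transformers pipeline; the Persian question/context pair is illustrative:

```python
# Minimal sketch; the question and context are illustrative.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Z-Jafari/bert-fa-base-uncased-finetuned-deduplicate_PersianQuAD",
)
print(qa(question="پایتخت ایران کجاست؟", context="تهران پایتخت ایران است."))
```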
Guardrium/spicy-motivator-dpo | Guardrium | 2025-12-09T03:37:33Z | 0 | 0 | peft | [
"peft",
"safetensors",
"dpo",
"llama",
"korean",
"sarcasm",
"lora",
"ko",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:apache-2.0",
"region:us"
] | null | 2025-12-09T03:37:20Z | # Spicy Motivator - DPO
A model that rewrites Korean aphorisms into sarcastic sentences (trained with DPO)
## Model Description
- **Base Model**: meta-llama/Llama-3.1-8B
- **Training Method**: Direct Preference Optimization (DPO)
- **LoRA**: r=16, alpha=32
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
# Base model ... | [] |
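The usage block above is cut off by the card truncation; a minimal sketch of the standard PEFT pattern it begins (loading the gated base model requires an accepted Llama license):

```python
# Minimal sketch completing the truncated usage block: attach the DPO-trained
# LoRA adapter to the Llama 3.1 base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B", torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")
model = PeftModel.from_pretrained(base, "Guardrium/spicy-motivator-dpo")
```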
ludde73865/5a95cc74-2baa-4886-a8cc-bf8e1ac905b2 | ludde73865 | 2026-03-04T10:11:17Z | 28 | 0 | lerobot | [
"lerobot",
"safetensors",
"pi0",
"robotics",
"dataset:LeRobotChild/my_robot_dataset_v1.19",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-04T10:09:13Z | # Model Card for pi0
<!-- Provide a quick summary of what the model is/does. -->
**π₀ (Pi0)**
π₀ is a Vision-Language-Action model for general robot control, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀ represents a breakthrough ... | [] |
MarkRedeman/smolvla_policy | MarkRedeman | 2025-09-07T01:46:49Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:MarkRedeman/record-put-three-dice-in-cup",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-09-07T01:46:32Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
azizbekabdullaev/gemma-3-4b-it | azizbekabdullaev | 2025-10-12T20:24:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"unsloth",
"conversational",
"arxiv:1905.07830",
"arxiv:1905.10044",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1705.03551",
"arxiv:1911.01547",
"arxiv:1907.10641",
"arxiv:1903.00161",
"arxiv:2009.03300",
"arxiv:2304.06... | image-text-to-text | 2025-10-12T19:56:22Z | # Gemma 3 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core)
**Resources and Technical Documentation**:
* [Gemma 3 Technical Report][g3-tech-report]
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma3]
**Terms ... | [] |
PracticalWork/xlm-roberta-large-classifier-prompted | PracticalWork | 2025-10-15T08:55:38Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:PracticalWork/xlm-roberta-large-classifier",
"base_model:finetune:PracticalWork/xlm-roberta-large-classifier",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us... | text-classification | 2025-10-15T08:54:22Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-classifier-prompted
This model is a fine-tuned version of [PracticalWork/xlm-roberta-large-classifie... | [] |
OpenTSLM/gemma-3-1b-pt-ecg-sp | OpenTSLM | 2025-12-06T12:05:35Z | 0 | 0 | null | [
"arxiv:2510.02410",
"region:us"
] | null | 2025-09-21T15:28:35Z | # OpenTSLM/gemma-3-1b-pt-ecg-sp
This model is part of the OpenTSLM project and was trained on ECG Question Answering, using Gemma 3 1B as the base language model with a Soft Prompt architecture.
## Paper
For details, please refer to our publication:
**OpenTSLM: Time-Series Language Models for Reasoning over Multivaria... | [] |
WindyWord/translate-tc-big-gmq-itc | WindyWord | 2026-04-20T13:35:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"north-germanic",
"swedish",
"danish",
"norwegian",
"icelandic",
"faroese",
"italic",
"italian",
"spanish",
"portuguese",
"french",
"catalan",
"romanian",
"gmq",
"itc",
"license:cc-by-4.0",
"endpoints_comp... | translation | 2026-04-20T12:53:08Z | # WindyWord.ai Translation — North Germanic → Italic
**Translates North Germanic (Swedish, Danish, Norwegian, Icelandic, Faroese) → Italic (Italian, Spanish, Portuguese, French, Catalan, Romanian).**
**Quality Rating: ⭐⭐½ (2.5★ Basic)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ pro... | [] |
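A minimal translation sketch; the ">>ita<<" target-language token follows the usual OPUS/Marian multi-target convention and is an assumption here, as is direct pipeline compatibility:

```python
# Minimal sketch; the ">>ita<<" target token is an assumption (OPUS convention).
from transformers import pipeline

translate = pipeline("translation", model="WindyWord/translate-tc-big-gmq-itc")
print(translate(">>ita<< Det här är ett test.", max_length=64)[0]["translation_text"])
```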