| modelId (string, 9–122 chars) | author (string, 2–36 chars) | last_modified (timestamp[us, tz=UTC], 2021-05-20 01:31:09 to 2026-05-05 06:14:24) | downloads (int64, 0 to 4.03M) | likes (int64, 0 to 4.32k) | library_name (string, 189 classes) | tags (list, 1–237 items) | pipeline_tag (string, 53 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2026-05-05 05:54:22) | card (string, 500–661k chars) | entities (list, 0–12 items) |
|---|---|---|---|---|---|---|---|---|---|---|
berkde/business-news-generator | berkde | 2026-01-29T01:24:20Z | 4 | 0 | peft | [
"peft",
"safetensors",
"llama",
"text-generation",
"base_model:adapter:HuggingFaceTB/SmolLM-135M",
"lora",
"transformers",
"base_model:HuggingFaceTB/SmolLM-135M",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-28T15:44:17Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# business-news-generator
This model is a fine-tuned version of [HuggingFaceTB/SmolLM-135M](https://huggingface.co/HuggingFaceTB/Sm... | [] |
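The card above is truncated; for an adapter like this (published with PEFT on top of HuggingFaceTB/SmolLM-135M, per the row's tags), a minimal loading sketch under those assumptions:

```python
# A minimal sketch, assuming a standard PEFT LoRA layout over
# HuggingFaceTB/SmolLM-135M (as the row's base_model tags indicate).
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo = "berkde/business-news-generator"
model = AutoPeftModelForCausalLM.from_pretrained(repo)  # loads base + adapter
tokenizer = AutoTokenizer.from_pretrained(repo)

inputs = tokenizer("Markets opened higher today as", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```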
compellit/byt5-scansion-gl-sg | compellit | 2026-03-30T18:10:37Z | 30 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"gl",
"base_model:google/byt5-small",
"base_model:finetune:google/byt5-small",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2026-03-29T17:35:56Z | # Model Card for byt5-scan-gl-sg
Metrical scansion in Galician (lexical to metrical syllabification). Fine-tuned byT5.
Operates on a single line (without additional context lines), unlike the models ending with *-cx* in this collection.
Input format: `E / os / *her- / mos / re- / ver- / *de- / cen / do / es- / *pri-... | [] |
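Given the documented single-line, slash-separated syllable input, a generation sketch (assuming the standard transformers seq2seq API; the input line is adapted from the card's truncated example):

```python
# A minimal sketch, assuming the standard transformers seq2seq API; the
# input line is adapted from the card's (truncated) format example.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "compellit/byt5-scansion-gl-sg"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

line = "E / os / *her- / mos / re- / ver- / *de- / cen / do"  # hypothetical input
outputs = model.generate(**tokenizer(line, return_tensors="pt"), max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```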
Intellexus/gemma2-2b-bo-10k-4096 | Intellexus | 2026-01-12T15:52:33Z | 0 | 0 | null | [
"safetensors",
"gemma2",
"gemma2-2b",
"vocabulary-expansion",
"low-resource",
"lora",
"bo",
"en",
"arxiv:2408.00118",
"arxiv:2205.12654",
"arxiv:2207.04672",
"base_model:google/gemma-2-2b",
"base_model:adapter:google/gemma-2-2b",
"license:cc-by-4.0",
"region:us"
] | null | 2026-01-12T15:45:42Z | # gemma2-2b-bo-10k-4096
This model is a vocabulary-expanded version of `gemma2-2b` for **Tibetan**.
## Training Details
| Parameter | Value |
|-----------|-------|
| Base Model | gemma2-2b |
| Target Language | Tibetan |
| Training Samples | 10,000 |
| Added Tokens | 4096 |
## Method
1. **Stage 1**: Initialize new... | [] |
imgailab/flux1-trtx-dev-fp4-blackwell | imgailab | 2025-08-12T02:37:42Z | 4 | 1 | tensorrt-rtx | [
"tensorrt-rtx",
"flux1-dev",
"flux1",
"fp4",
"dev",
"optimized",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:apache-2.0",
"region:us"
] | null | 2025-08-12T02:37:39Z | # FLUX1 TensorRT-RTX: DEV-Fp4 🔨 Building
Optimized TensorRT-RTX engines for the **FLUX1** **DEV** variant with **FP4** quantization.
## 🎯 This Repository
**One variant, one download** - only get exactly what you need!
- **Model**: FLUX1
- **Architecture**: Blackwell (Compute Capability 8.0+)
- **Quantization**: D... | [] |
thrnn/ppo-gpt2-medium-prefix-backup | thrnn | 2026-02-28T12:09:52Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | 2026-02-27T07:04:21Z | # TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL... | [] |
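The usage section is cut off; TRL value-head checkpoints of this kind are typically loaded as below (a sketch, assuming a TRL version that ships the value-head class):

```python
# A minimal sketch, assuming a TRL version that provides
# AutoModelForCausalLMWithValueHead, per the truncated "Usage" section.
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead

repo = "thrnn/ppo-gpt2-medium-prefix-backup"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLMWithValueHead.from_pretrained(repo)

inputs = tokenizer("The company announced", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```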
rocky1410/haipai-micro | rocky1410 | 2026-01-13T18:17:22Z | 5 | 0 | null | [
"safetensors",
"haipai",
"pytorch",
"causal-lm",
"text-generation",
"custom-architecture",
"micro",
"GPT",
"gpt",
"GQA",
"Factorized Embeddings",
"tiny-llm",
"smol",
"55m",
"pretraining",
"synthetic-data",
"education",
"math",
"code",
"low-resource",
"en",
"custom_code",
... | text-generation | 2025-12-31T10:43:27Z | # Haipai-micro
**Haipai-micro** is a "Micro-LLM" designed to test the limits of parameter efficiency. Despite having only **55 Million parameters** (roughly 1/2 the size of GPT-2 Small), it achieves surprising performance on common sense and reasoning benchmarks by utilizing a high-density dataset mix.
This is a **Ba... | [] |
mosshi/gpt2-finetuned-ja | mosshi | 2025-09-26T04:35:58Z | 0 | 0 | null | [
"safetensors",
"gpt2",
"region:us"
] | null | 2025-09-13T16:16:51Z | # gpt2-finetuned-ja
## Model Overview
Starting from GPT-2 small (approx. 124M parameters), this model was retrained at small scale (continued pretraining) on short Japanese texts.
It is a **sample model for learning purposes** and does not reach production-level performance.
---
## Training Data
- Aozora Bunko (a subset of works whose copyright has expired)
---
## Purpose and Usage
- For individuals to **experience the continued-pretraining process for an LLM**
- A model that beginners can **run and verify on Google Colab**
---
## Training Environment
- Google Colab (GPU: T4)
- Training time: approx. 2... | [] |
DhruvJalan/rl_course_vizdoom_health_gathering_supreme | DhruvJalan | 2025-12-20T11:51:11Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-12-19T21:18:03Z | A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sam... | [
{
"start": 7,
"end": 11,
"text": "APPO",
"label": "training method",
"score": 0.8318852782249451
},
{
"start": 635,
"end": 639,
"text": "APPO",
"label": "training method",
"score": 0.8054978847503662
},
{
"start": 713,
"end": 755,
"text": "rl_course_vizdoo... |
kaimoonstar/SD1.5_imageNet | kaimoonstar | 2025-11-10T08:32:27Z | 0 | 1 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"imagenet",
"blip",
"text-to-image",
"en",
"dataset:ILSVRC/imagenet-1k",
"base_model:rupeshs/LCM-runwayml-stable-diffusion-v1-5",
"base_model:finetune:rupeshs/LCM-runwayml-stable-diffusion-v1-5",
"region:us"
] | text-to-image | 2025-10-16T00:38:07Z | ---
datasets:
- ILSVRC/imagenet-1k
base_model:
- rupeshs/LCM-runwayml-stable-diffusion-v1-5
pipeline_tag: text-to-image
tags:
- stable-diffusion-1-5
- imagenet
- blip
---
# [SD 1.5 - ImageNet-BLIP-Finetune]
This is a **Stable Diffusion 1.5** model fine-tuned on the **ImageNet** dataset.
Unlike conventional fine-tuning, training for this model did **not** use ImageNet's original single-class labels (e.g., "tench"... | [
{
"start": 162,
"end": 166,
"text": "blip",
"label": "training method",
"score": 0.7869033217430115
},
{
"start": 345,
"end": 349,
"text": "BLIP",
"label": "training method",
"score": 0.78493732213974
}
] |
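For a text-to-image fine-tune like the one above, a sampling sketch with diffusers (assuming a standard SD 1.5 pipeline layout; since the base is an LCM variant, a low step count may be appropriate):

```python
# A minimal sketch, assuming a standard Stable Diffusion 1.5 pipeline layout;
# the LCM-derived base may work best with few inference steps.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "kaimoonstar/SD1.5_imageNet", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of a tench in a river", num_inference_steps=8).images[0]
image.save("tench.png")
```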
iamshnoo/combined_no_asia_with_metadata_1b_step8k | iamshnoo | 2026-04-02T14:46:02Z | 282 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"metadata-localization",
"leave-one-out",
"1b",
"with-metadata",
"pretraining",
"intermediate-checkpoint",
"arxiv:2601.15236",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-01T20:58:19Z | # combined_no_asia_with_metadata_1b_step8k
## Summary
This repo contains the leave-out-Asia 1B step-8k model, exported from the 8k checkpoint of the metadata-localization project. It was trained from scratch on the project corpus, using the Llama 3.2 tokenizer and vocabulary.
## Variant Metadata
- Stage: `pretrain`
... | [] |
DarshanM0di/fireandsmoke | DarshanM0di | 2026-02-20T07:04:17Z | 0 | 0 | null | [
"onnx",
"yolo26n",
"object-detection",
"en",
"license:mit",
"region:us"
] | object-detection | 2026-02-18T06:20:52Z | 🔥 Fire & Smoke Detection — YOLO‑26n (ONNX)
📝 Overview
This model is a fire and smoke detection system built using YOLO‑26n (YOLO‑NAS) and exported to ONNX format for fast, lightweight deployment. It is trained for 100 epochs on a custom dataset containing annotated fire and smoke images. The model is optimized for C... | [
{
"start": 1811,
"end": 1815,
"text": "ONNX",
"label": "training method",
"score": 0.7442783117294312
}
] |
aru2908/qwen2-audio-7B-content | aru2908 | 2025-09-24T20:38:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2-Audio-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-Audio-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T12:56:05Z | # Model Card for qwen2-audio-7B-content
This model is a fine-tuned version of [Qwen/Qwen2-Audio-7B-Instruct](https://huggingface.co/Qwen/Qwen2-Audio-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a ... | [] |
uralstech/AIDE-Chip-Surrogates | uralstech | 2026-04-11T15:26:50Z | 0 | 0 | xgboost | [
"xgboost",
"computer-architecture",
"gem5",
"cache",
"surrogate-model",
"explainable-ai",
"shap",
"monotonic-constraints",
"systems-ml",
"tabular-regression",
"dataset:uralstech/AIDE-Chip-15K-gem5-Sims",
"doi:10.57967/hf/7539",
"license:cc-by-nc-sa-4.0",
"region:us"
] | tabular-regression | 2026-01-15T12:47:02Z | # AIDE Chip Surrogates
This is a collection of physics-aware, monotonicity-constrained XGBoost models that replace expensive gem5 cache simulations during design-space exploration.
Each model predicts either IPC or L2 miss rate for a specific workload, using only cache configuration parameters as input. The models ar... | [] |
mlx-community/gemma-4-31b-it-5bit | mlx-community | 2026-04-13T13:10:20Z | 1,881 | 1 | mlx | [
"mlx",
"safetensors",
"gemma4",
"image-text-to-text",
"conversational",
"license:apache-2.0",
"5-bit",
"region:us"
] | image-text-to-text | 2026-04-02T16:57:20Z | # mlx-community/gemma-4-31b-it-5bit
This model was converted to MLX format from [`google/gemma-4-31b-it`](https://huggingface.co/google/gemma-4-31b-it)
using mlx-vlm version **0.4.3**.
Refer to the [original model card](https://huggingface.co/google/gemma-4-31b-it) for more details on the model.
## Use with mlx
```b... | [] |
ntAnh-dev/paraphrase-multilingual-MiniLM-L12-v2-bk-quyche | ntAnh-dev | 2025-12-26T10:00:03Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:4485",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12... | sentence-similarity | 2025-12-26T09:59:53Z | # SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2). It ... | [] |
weirek/Affine-0213-5FCS6P3tnQ8ojoXbstfWoJbH2853Ygn5A1DYJ5M7QLnvaKJt | weirek | 2026-02-14T16:44:40Z | 9 | 0 | null | [
"safetensors",
"minimax_m2",
"pytorch",
"causal-lm",
"text-generation",
"conversational",
"license:apache-2.0",
"4-bit",
"region:us"
] | text-generation | 2026-02-13T16:11:19Z | # Affine-0213-5FCS6P3tnQ8ojoXbstfWoJbH2853Ygn5A1DYJ5M7QLnvaKJt
This model has been fine-tuned for conversational AI tasks.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained(
"weirek/Affine-0213-5FCS6P3tnQ8ojoXbstfWoJbH2853Ygn5A1DYJ5M7QLn... | [] |
qqceqqq/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled | qqceqqq | 2026-03-30T01:12:50Z | 11 | 0 | null | [
"safetensors",
"qwen3_5",
"unsloth",
"qwen",
"qwen3.5",
"reasoning",
"chain-of-thought",
"Dense",
"image-text-to-text",
"conversational",
"en",
"zh",
"dataset:nohurry/Opus-4.6-Reasoning-3000x-filtered",
"dataset:Jackrong/Qwen3.5-reasoning-700x",
"base_model:Qwen/Qwen3.5-27B",
"base_mod... | image-text-to-text | 2026-03-30T01:12:50Z | # 🌟 Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled
> **Build Environment Upgrades:**
> - **Fine-tuning Framework**: **Unsloth 2026.3.3**
> - **Core Dependencies**: **Transformers 5.2.0**
> - This model fixes the crash in the official model caused by the Jinja template not supporting the **"developer"** role. (commo... | [] |
Yysrc/Mantis-Base | Yysrc | 2025-12-03T11:56:37Z | 334 | 0 | transformers | [
"transformers",
"safetensors",
"mantis",
"robotics",
"arxiv:2511.16175",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | robotics | 2025-11-18T15:19:29Z | # Mantis
> This is the official checkpoint of **Mantis: A Versatile Vision-Language-Action Model
with Disentangled Visual Foresight**
- **Paper:** https://arxiv.org/pdf/2511.16175
- **Code:** https://github.com/zhijie-group/Mantis
### 🔥 Highlights
- **Disentangled Visual Foresight** augments action learning without... | [
{
"start": 352,
"end": 372,
"text": "Progressive Training",
"label": "training method",
"score": 0.9291231632232666
}
] |
HuggingFaceFW/finepdfs_edu_classifier_frp_Latn | HuggingFaceFW | 2025-10-06T05:50:07Z | 7 | 0 | null | [
"safetensors",
"modernbert",
"fr",
"dataset:HuggingFaceFW/finepdfs_fw_edu_labeled",
"license:apache-2.0",
"region:us"
] | null | 2025-10-06T05:49:53Z | ---
language:
- fr
license: apache-2.0
datasets:
- HuggingFaceFW/finepdfs_fw_edu_labeled
---
# FinePDFs-Edu classifier (frp_Latn)
## Model summary
This is a classifier for judging the educational value of web pages. It was developed to filter and curate educational content from web datasets and was trained on 323456 ... | [] |
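FineWeb-Edu-style classifiers usually expose a sequence-classification head that emits a single educational-value score; a scoring sketch under that assumption:

```python
# A minimal sketch, assuming this follows the FineWeb-Edu classifier pattern
# (a sequence-classification head producing one educational-value logit).
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "HuggingFaceFW/finepdfs_edu_classifier_frp_Latn"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Une leçon de grammaire pour débutants", return_tensors="pt", truncation=True)
score = model(**inputs).logits.squeeze(-1).item()
print(f"educational value score: {score:.2f}")
```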
jekunz/Qwen3-1.7B-Base-sv-CPT-plus-IR-sv-SmolTalk | jekunz | 2026-04-24T08:27:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-24T08:26:27Z | # Model Card for qwen-sv10m-merged-sv-smoltalk
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the pa... | [] |
hubnemo/libero-lora-lora-test | hubnemo | 2025-12-10T01:49:23Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:HuggingFaceVLA/libero",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-04T12:39:37Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
mradermacher/Starlit-Shadow-12B-GGUF | mradermacher | 2025-12-21T21:46:49Z | 41 | 2 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"roleplay",
"en",
"base_model:Vortex5/Starlit-Shadow-12B",
"base_model:quantized:Vortex5/Starlit-Shadow-12B",
"endpoints_compatible",
"region:us"
] | null | 2025-12-21T11:19:43Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
cjiao/golden-goose-qwen2.5-1.5b-instruct-all | cjiao | 2026-04-15T03:13:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"grpo",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:... | text-generation | 2026-04-14T20:36:35Z | # Model Card for golden-goose-qwen2.5-1.5b-instruct-all
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "I... | [] |
Rarakiyo/dpo-qwen-cot-merged_ra | Rarakiyo | 2026-02-03T10:53:47Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"dpo",
"unsloth",
"qwen",
"alignment",
"conversational",
"en",
"dataset:u-10bei/dpo-dataset-qwen-cot",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"text-gener... | text-generation | 2026-02-03T10:50:35Z | # My-First-DPO-Model-for-Main-Compe
This model is a fine-tuned version of **Qwen/Qwen3-4B-Instruct-2507** using **Direct Preference Optimization (DPO)** via the **Unsloth** library.
This repository contains the **full-merged 16-bit weights**. No adapter loading is required.
## Training Objective
This model has been ... | [
{
"start": 115,
"end": 145,
"text": "Direct Preference Optimization",
"label": "training method",
"score": 0.8788214921951294
},
{
"start": 147,
"end": 150,
"text": "DPO",
"label": "training method",
"score": 0.8821470737457275
},
{
"start": 336,
"end": 339,
... |
UsefulSensors/moonshine-tiny-ar | UsefulSensors | 2025-09-06T01:16:10Z | 573 | 6 | transformers | [
"transformers",
"safetensors",
"moonshine",
"automatic-speech-recognition",
"ar",
"arxiv:2509.02523",
"arxiv:1810.03993",
"license:other",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-09-01T17:05:15Z | # Moonshine
This model is part of the Moonshine family of tiny specialized Automatic Speech Recognition (ASR) models for edge devices, as described in [Flavors of Moonshine: Tiny Specialized ASR Models for Edge Devices](https://huggingface.co/papers/2509.02523).
[[Paper]](https://huggingface.co/papers/2509.02523) | [... | [] |
ArkMaster123/qwen2.5-7b-therapist | ArkMaster123 | 2025-12-13T00:32:35Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"fine-tuned",
"therapy",
"counseling",
"mental-health",
"qwen",
"lora",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-12-12T20:49:26Z | # Qwen2.5-7B-Instruct Therapist
This is a fine-tuned version of Qwen/Qwen2.5-7B-Instruct, specifically trained for therapeutic conversations.
## Model Details
- **Base Model**: Qwen/Qwen2.5-7B-Instruct
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- **Training Dataset**: Jyz1331/therapist_conversations + Safe... | [
{
"start": 231,
"end": 235,
"text": "LoRA",
"label": "training method",
"score": 0.7178220748901367
}
] |
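The card above names LoRA as the fine-tuning method; if the repo holds adapter weights (per the `base_model:adapter` tag), loading follows the usual PEFT pattern:

```python
# A minimal sketch, assuming the repo holds PEFT LoRA adapter weights over
# Qwen/Qwen2.5-7B-Instruct (per the row's base_model:adapter tag).
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "ArkMaster123/qwen2.5-7b-therapist")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
```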
ToPo-ToPo/ai-character-suuchi-kai-3.6b | ToPo-ToPo | 2025-12-16T13:51:14Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"text-generation",
"base_model:adapter:ToPo-ToPo/rinna-japanese-gpt-neox-3.6b-lora-sft-v1",
"lora",
"transformers",
"ja",
"dataset:ToPo-ToPo/ai-characters-QA",
"base_model:ToPo-ToPo/rinna-japanese-gpt-neox-3.6b-lora-sft-v1",
"text-generation-inference",
"endp... | text-generation | 2025-12-16T13:24:24Z | # 概要
AIキャラクターの「数値カイ」のモデルです。キャラクター性をLoRAで学習させています。
# モデルの使い方
CHARACTER_SYSTEM_PROMPT含めて学習しているため、必ず入力が必要です。
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
#====================================================================
# Configuration
#=====================================================... | [] |
Synaptics/sr100_person_classification_256x448 | Synaptics | 2025-09-27T20:48:45Z | 13 | 0 | tflite | [
"tflite",
"Astra SR",
"SR100",
"MCU",
"Person Classification",
"image-classification",
"license:apache-2.0",
"region:us"
] | image-classification | 2025-08-18T22:32:57Z | # Person Classification 256x448 (SR100 Series)
## Model Overview
The **Person Classification 256x448** model, developed by Synaptics, is a lightweight quantized `tflite` model built for the **SR100 processor** in the Synaptics Astra™ SR MCU Series.
It efficiently classifies input images as either **person** or... | [] |
FractalSurfer/TimeCapsuleLLM-v2-1800-1875-mlx-fp16 | FractalSurfer | 2026-01-12T16:49:41Z | 26 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"historical",
"causal-lm",
"mlx",
"mlx-my-repo",
"en",
"base_model:haykgrigorian/TimeCapsuleLLM-v2-1800-1875",
"base_model:finetune:haykgrigorian/TimeCapsuleLLM-v2-1800-1875",
"license:mit",
"text-generation-inference",
"endpoints_... | text-generation | 2026-01-12T16:49:27Z | # FractalSurfer/TimeCapsuleLLM-v2-1800-1875-mlx-fp16
The Model [FractalSurfer/TimeCapsuleLLM-v2-1800-1875-mlx-fp16](https://huggingface.co/FractalSurfer/TimeCapsuleLLM-v2-1800-1875-mlx-fp16) was converted to MLX format from [haykgrigorian/TimeCapsuleLLM-v2-1800-1875](https://huggingface.co/haykgrigorian/TimeCapsuleLLM... | [] |
haongn/Qwen2.5-0.5B-dkkd-v2-1 | haongn | 2026-03-09T10:49:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"unsloth",
"trl",
"endpoints_compatible",
"region:us"
] | null | 2026-03-09T04:55:22Z | # Model Card for Qwen2.5-0.5B-dkkd-v2-1
This model is a fine-tuned version of [unsloth/qwen2.5-0.5b-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen2.5-0.5b-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
questio... | [] |
nm-testing/Qwen3-VL-235B-A22B-Instruct-FP8-BLOCK | nm-testing | 2025-10-27T14:11:45Z | 0 | 0 | null | [
"fp8",
"quantized",
"llm-compressor",
"compressed-tensors",
"red hat",
"text-generation",
"base_model:Qwen/Qwen3-VL-235B-A22B-Instruct",
"base_model:finetune:Qwen/Qwen3-VL-235B-A22B-Instruct",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-10-12T17:22:05Z | # Qwen3-VL-235B-A22B-Instruct-FP8-BLOCK
## Model Overview
- **Model Architecture:** Qwen3VLMoeForConditionalGeneration
- **Input:** Text, Image
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** FP8
- **Activation quantization:** FP8
- **Release Date:**
- **Version:** 1.0
- **Model Devel... | [] |
WindyWord/translate-loz-fi | WindyWord | 2026-04-28T00:00:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"lozi",
"finnish",
"loz",
"fi",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2026-04-18T04:42:01Z | # WindyWord.ai Translation — Lozi → Finnish
**Translates Lozi → Finnish.**
**Quality Rating: ⭐⭐½ (2.5★ Basic)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs.
## Quality & Pricing Tier
- **5-star rating:** 2.5★ ⭐⭐½
- **Tier:** Basic
- **Composite score:** 5... | [] |
Ruggero1912/Patch-ioner_talk2dino_decap_COCO_Captions | Ruggero1912 | 2025-10-14T06:56:39Z | 5 | 0 | transformers | [
"transformers",
"patchioner",
"feature-extraction",
"vision",
"image-to-text",
"image-captioning",
"zero-shot",
"dense-captioning",
"patch-ioner",
"custom_code",
"arxiv:2510.02898",
"license:apache-2.0",
"model-index",
"region:us"
] | image-to-text | 2025-10-07T14:00:23Z | # Patch-ioner_talk2dino_decap_COCO_Captions - Patch-ioner Configuration
This repository contains a pre-trained DECAP model from the **Patch-ioner** framework for dense image captioning and controllable visual description.
## 📝 Paper Information
**Title**: "One Patch to Caption Them All: A Unified Zero-Shot Captioni... | [] |
nightmedia/huizimao-gpt-oss-20b-uncensored-mxfp4-q8-hi-mlx | nightmedia | 2025-08-10T12:09:50Z | 293 | 2 | mlx | [
"mlx",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"base_model:huizimao/gpt-oss-20b-uncensored-mxfp4",
"base_model:quantized:huizimao/gpt-oss-20b-uncensored-mxfp4",
"license:apache-2.0",
"8-bit",
"region:us"
] | text-generation | 2025-08-10T02:00:25Z | # huizimao-gpt-oss-20b-uncensored-mxfp4-q8-hi-mlx
This model contains config and template fixes by Unsloth
This model [huizimao-gpt-oss-20b-uncensored-mxfp4-q8-hi-mlx](https://huggingface.co/huizimao-gpt-oss-20b-uncensored-mxfp4-q8-hi-mlx) was
converted to MLX format from [huizimao/gpt-oss-20b-uncensored-mxfp4](https... | [] |
mradermacher/Gemma-4-Queen-31B-it-uncensored-heretic-i1-GGUF | mradermacher | 2026-04-19T13:57:10Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"heretic",
"uncensored",
"decensored",
"abliterated",
"ara",
"roleplay",
"gemma",
"gemma4",
"sillytavern",
"idol",
"pytorch",
"DarkIdol",
"Queen",
"any-to-any",
"OpenClaw",
"en",
"base_model:llmfan46/Gemma-4-Queen-31B-it-uncensored-heretic",
"base_model:... | any-to-any | 2026-04-19T11:39:32Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
ErenAta00/Morpheus-LLM-14B-Virtual-Reality-Model | ErenAta00 | 2026-01-25T22:13:08Z | 75 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"unity",
"xr",
"virtual-reality",
"augmented-reality",
"mixed-reality",
"csharp",
"game-development",
"morpheus",
"unsloth",
"conversational",
"en",
"tr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-25T19:45:09Z | <div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/672b6a9a5cf8fcb442436a7a/bg6q9Rvr6Ob3jk-uRaLE5.png" width="100%" alt="Morpheus Banner"/>
</div>
<div align="center">
<img src="https://img.shields.io/badge/Parameters-14B-4285F4?style=for-the-badge&logo=huggingface&logoColor=whi... | [] |
nomic-ai/CodeRankLLM | nomic-ai | 2025-06-24T02:00:23Z | 2,699 | 21 | null | [
"safetensors",
"qwen2",
"arxiv:2412.01007",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:mit",
"region:us"
] | null | 2024-11-08T23:14:04Z | `CodeRankLLM` is a 7B LLM fine-tuned for listwise code-reranking. When combined with performant code retrievers like [`CodeRankEmbed`](https://huggingface.co/cornstack/CodeRankEmbed), it significantly enhances the quality of retrieved results for various code retrieval tasks.
We release the scripts to evaluate our mo... | [
{
"start": 441,
"end": 459,
"text": "listwise reranking",
"label": "training method",
"score": 0.7205555438995361
}
] |
Mitchins/innit-language-detection | Mitchins | 2025-08-27T03:33:52Z | 1 | 0 | pytorch | [
"pytorch",
"onnx",
"safetensors",
"byte_cnn",
"text-classification",
"language-detection",
"byte-level",
"multilingual",
"english-detection",
"cnn",
"dataset:custom",
"license:mit",
"model-index",
"region:us"
] | text-classification | 2025-08-27T03:19:35Z | # innit: Fast English vs Non-English Text Detection
A lightweight byte-level CNN for fast binary language detection (English vs Non-English).
## Model Details
- **Model Type**: Byte-level Convolutional Neural Network
- **Task**: Binary text classification (English vs Non-English)
- **Architecture**: TinyByteCNN_EN w... | [] |
asdf234098fsjk/MyGemmaBotFine | asdf234098fsjk | 2025-08-14T20:37:31Z | 3 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-14T20:32:03Z | # Model Card for MyGemmaBotFine
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but co... | [] |
NastasiaM/mbart_cnn_summarization_model_10K_8ep_3e-5_batch_8_BEST | NastasiaM | 2026-02-27T15:04:44Z | 55 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/mbart-large-50",
"base_model:finetune:facebook/mbart-large-50",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2026-02-27T13:27:03Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart_cnn_summarization_model_10K_8ep_3e-5_batch_8_BEST
This model is a fine-tuned version of [facebook/mbart-large-50](https://h... | [] |
mradermacher/celebrimbor-gpt2-medium-x81-GGUF | mradermacher | 2025-11-11T14:26:25Z | 15 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:stanford-crfm/celebrimbor-gpt2-medium-x81",
"base_model:quantized:stanford-crfm/celebrimbor-gpt2-medium-x81",
"endpoints_compatible",
"region:us"
] | null | 2025-11-11T14:24:16Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
BootesVoid/cmepk1ksc0ajrtlqb2lpgjx6r_cmepl6wdi0al8tlqbyq8y90lp | BootesVoid | 2025-08-24T11:46:10Z | 1 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-08-24T11:46:08Z | # Cmepk1Ksc0Ajrtlqb2Lpgjx6R_Cmepl6Wdi0Al8Tlqbyq8Y90Lp
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https:... | [] |
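The card says the LoRA works with diffusers or ComfyUI; a diffusers sketch (FLUX.1-dev is gated, so Hub access to the base model is assumed):

```python
# A minimal sketch for diffusers, assuming standard FLUX.1-dev LoRA weights;
# access to the gated base model is required.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("BootesVoid/cmepk1ksc0ajrtlqb2lpgjx6r_cmepl6wdi0al8tlqbyq8y90lp")

image = pipe("portrait photo", num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("out.png")
```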
mradermacher/Qwen3.5_9B_Base-GGUF | mradermacher | 2026-04-24T06:08:20Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Babsie/Qwen3.5_9B_Base",
"base_model:quantized:Babsie/Qwen3.5_9B_Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-24T05:05:55Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
danielsanjosepro/ditflow_drawer_without_ft_tact_v2 | danielsanjosepro | 2025-12-18T10:18:43Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"ditflow",
"robotics",
"dataset:LSY-lab/drawer_without_ft_tact_v2",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-18T10:18:24Z | # Model Card for ditflow
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingfac... | [] |
YuhengSSS/llava-v1.5-13b-roi-K15T3-152k-v1bf16Mheads-twiginit | YuhengSSS | 2025-10-06T00:07:34Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llava_llama",
"text-generation",
"image-text-to-text",
"arxiv:2509.16944",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-09-15T09:36:26Z | # llava-v1.5-13b-roi-K15T3-152k-v1bf16Mheads-twiginit
This model is associated with the paper [Catching the Details: Self-Distilled RoI Predictors for Fine-Grained MLLM Perception](https://huggingface.co/papers/2509.16944).
## Introduction
While recent methods leverage a Region-of-Interest (RoI) mechanism to focus on... | [] |
mlx-community/Qwen3-Next-80B-A3B-Instruct-4bit | mlx-community | 2025-09-12T17:12:16Z | 20,462 | 23 | mlx | [
"mlx",
"safetensors",
"qwen3_next",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-Next-80B-A3B-Instruct",
"base_model:quantized:Qwen/Qwen3-Next-80B-A3B-Instruct",
"license:apache-2.0",
"4-bit",
"region:us"
] | text-generation | 2025-09-12T17:00:49Z | # mlx-community/Qwen3-Next-80B-A3B-Instruct-4bit
This model [mlx-community/Qwen3-Next-80B-A3B-Instruct-4bit](https://huggingface.co/mlx-community/Qwen3-Next-80B-A3B-Instruct-4bit) was
converted to MLX format from [Qwen/Qwen3-Next-80B-A3B-Instruct](https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Instruct)
using mlx-lm v... | [] |
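mlx-community conversion cards usually finish this section with the mlx-lm snippet below, reproduced as a sketch:

```python
# The usual mlx-lm usage pattern these conversion cards document,
# shown as a sketch.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen3-Next-80B-A3B-Instruct-4bit")

prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```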
contemmcm/c607fe21c3d3354bedff0cb42a0d2765 | contemmcm | 2025-10-27T08:06:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-xl",
"base_model:finetune:google/mt5-xl",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-10-27T07:07:01Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# c607fe21c3d3354bedff0cb42a0d2765
This model is a fine-tuned version of [google/mt5-xl](https://huggingface.co/google/mt5-xl) on t... | [] |
arianaazarbal/qwen3-4b-20260127_191710_lc_rh_sot_base_seed1_beta0.025-9c59d2-step40 | arianaazarbal | 2026-01-27T20:03:39Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-01-27T20:03:03Z | # qwen3-4b-20260127_191710_lc_rh_sot_base_seed1_beta0.025-9c59d2-step40
## Experiment Info
- **Full Experiment Name**: `20260127_191710_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_baseline_seed1_beta0.025`
- **Short Name**: `20260127_191710_lc_rh_sot_base_seed1_beta0.025-9c59d2`
- **Base Model**: `qwen/Q... | [] |
jumelet/gptbert-ron-125steps-base | jumelet | 2025-10-06T00:42:06Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_bert",
"feature-extraction",
"gpt-bert",
"babylm",
"remote-code",
"fill-mask",
"custom_code",
"license:other",
"region:us"
] | fill-mask | 2025-10-06T00:41:49Z | # jumelet/gptbert-ron-125steps-base
GPT-BERT style BabyBabyLLM model for language **ron**.
This repository may include both *main* and *EMA* variants.
**Default variant exposed to generic loaders:** `ema`
## Variants Available
ema, main
## Files
- model.safetensors (alias of default variant)
- model_ema.safetensor... | [] |
JiaMinEsc/act_stack-3-cube | JiaMinEsc | 2026-03-18T20:02:43Z | 32 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:JiaMinEsc/stack-3-cube",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-18T18:26:13Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
AmberYifan/Qwen2.5-14B-Instruct-ultrafeedback-drift-iter1-RPO | AmberYifan | 2025-08-05T22:00:12Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"... | text-generation | 2025-08-05T20:49:06Z | # Model Card for Qwen2.5-14B-Instruct-ultrafeedback-drift-iter1-RPO
This model is a fine-tuned version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
que... | [
{
"start": 213,
"end": 216,
"text": "TRL",
"label": "training method",
"score": 0.8114952445030212
},
{
"start": 987,
"end": 990,
"text": "DPO",
"label": "training method",
"score": 0.8456188440322876
},
{
"start": 1277,
"end": 1280,
"text": "DPO",
"la... |
shy888/act_so101_put_green_square_policy | shy888 | 2025-12-03T13:05:34Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:local_dataset",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-03T13:05:10Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
hugimagi/DeepSeek-V4-Pro | hugimagi | 2026-05-04T15:24:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deepseek_v4",
"text-generation",
"license:mit",
"endpoints_compatible",
"8-bit",
"fp8",
"region:us"
] | text-generation | 2026-05-04T15:24:51Z | # DeepSeek-V4: Towards Highly Efficient Million-Token Context Intelligence
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" w... | [] |
xspadex/llama-factory | xspadex | 2025-09-19T03:02:29Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compat... | image-text-to-text | 2025-08-12T16:26:27Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft_0910_4_3500
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Ins... | [] |
Brooooooklyn/Qwen3.5-35B-A3B-UD-Q6_K_XL-mlx | Brooooooklyn | 2026-03-29T14:45:06Z | 0 | 1 | mlx-node | [
"mlx-node",
"safetensors",
"qwen3_5_moe",
"mlx",
"quantized",
"awq",
"6-bit",
"qwen3.5",
"moe",
"hybrid-attention",
"gated-delta-net",
"apple-silicon",
"unsloth-dynamic",
"text-generation",
"conversational",
"en",
"zh",
"base_model:Qwen/Qwen3.5-35B-A3B",
"base_model:quantized:Qwe... | text-generation | 2026-03-29T14:41:55Z | # Qwen3.5-35B-A3B — UD-Q6_K_XL (mlx-node)
6-bit base mixed-precision quantization of [Qwen/Qwen3.5-35B-A3B](https://huggingface.co/Qwen/Qwen3.5-35B-A3B) for Apple Silicon, using the [**Unsloth Dynamic** quantization strategy](https://unsloth.ai/docs/models/qwen3.5/gguf-benchmarks) via [mlx-node](https://github.com/mlx... | [] |
Sumitwarrior7/sample-grpo-openenv | Sumitwarrior7 | 2026-04-25T23:02:46Z | 0 | 0 | null | [
"region:us"
] | null | 2026-04-25T22:31:03Z | # Fraud Detection RL Environment
An OpenEnv-compatible **multi-agent** RL environment for training and evaluating
fraud detection policies.
Two agents interact every episode:
- **Defender** (LLM / PPO) — learns to detect and block fraudulent activity.
- **Fraudster** (LLM / PPO) — acts as an adaptive adversary trying... | [
{
"start": 1385,
"end": 1397,
"text": "mule_cashout",
"label": "training method",
"score": 0.7717376947402954
}
] |
AIFunOver/Llama-3.2-1B-Instruct-openvino-4bit | AIFunOver | 2024-11-07T16:43:45Z | 2 | 1 | transformers | [
"transformers",
"openvino",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"nncf",
"4-bit",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-1B-In... | text-generation | 2024-11-07T16:40:55Z | This model is a quantized version of [`meta-llama/Llama-3.2-1B-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) and is converted to the OpenVINO format. This model was obtained via the [nncf-quantization](https://huggingface.co/spaces/echarlaix/nncf-quantization) space with [optimum-intel](https://gi... | [] |
Haker18/Llama-Gemma-Hybrid-9B | Haker18 | 2026-02-07T11:26:59Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"llama-3.1",
"gemma-2",
"arabic",
"roleplay",
"en",
"ar",
"base_model:google/gemma-2-9b",
"base_model:merge:google/gemma-2-9b",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:merge:meta-llama/Llama-3.1-8... | text-generation | 2026-02-07T10:49:57Z | # merge_result
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the Passthrough merge method.
### Models Merged
The following models were included in the merge:
* [meta-llama/Llama-3.1-8B](https... | [
{
"start": 192,
"end": 216,
"text": "Passthrough merge method",
"label": "training method",
"score": 0.789554238319397
}
] |
huchukato/stemify-desktop | huchukato | 2026-02-08T03:02:00Z | 0 | 0 | stemify | [
"stemify",
"audio",
"music",
"stem-separation",
"demucs",
"desktop-app",
"audio-processing",
"ai",
"machine-learning",
"en",
"it",
"license:mit",
"region:us"
] | null | 2026-02-08T02:39:16Z | # 🎵 Stemify Desktop - The Audio Splitter

## 📝 Description
Stemify Desktop is a professional desktop application for AI-powered audio stem separation. Built with Facebook Research's Demucs model, it allows users to separate audi... | [] |
xummer/mistral-7b-belebele-lora-hau-latn | xummer | 2026-03-10T20:57:03Z | 13 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"llama-factory",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"license:other",
"region:us"
] | text-generation | 2026-03-10T20:56:43Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# belebele_hau_Latn
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mis... | [] |
AntoineChatry/mistral-7b-python | AntoineChatry | 2026-03-04T21:35:48Z | 184 | 0 | null | [
"gguf",
"mistral",
"llama.cpp",
"unsloth",
"python",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:quantized:mistralai/Mistral-7B-Instruct-v0.3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-04T21:00:44Z | # mistral-7b-python-gguf
Conversational Python fine-tune of Mistral 7B exported to GGUF format for local inference.
- Base model: Mistral 7B
- Fine-tuning framework: Unsloth
- Format: GGUF
- Author: AntoineChatry
---
# ⚠️ Disclaimer
This is an **early experimental fine-tune**.
It is **not production-ready**, not ... | [] |
BEKOBE/your-lora-repo | BEKOBE | 2026-02-06T02:00:52Z | 1 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-02T16:07:03Z | qwen3-4b-structured-output-lora-nonofficial-ver2
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is train... | [
{
"start": 150,
"end": 155,
"text": "QLoRA",
"label": "training method",
"score": 0.8041362762451172
},
{
"start": 591,
"end": 596,
"text": "QLoRA",
"label": "training method",
"score": 0.7069675922393799
}
] |
gokulsrinivasagan/whisper-base.en-fsc-v1 | gokulsrinivasagan | 2025-10-04T05:54:32Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"audio-classification",
"generated_from_trainer",
"base_model:openai/whisper-base.en",
"base_model:finetune:openai/whisper-base.en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2025-10-04T01:58:45Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base.en-fsc-v1
This model is a fine-tuned version of [openai/whisper-base.en](https://huggingface.co/openai/whisper-base.... | [] |
clarin-pl/combo-nlp-xlm-roberta-base-turkish-framenet-ud2.17 | clarin-pl | 2026-04-10T13:00:48Z | 0 | 0 | null | [
"pytorch",
"dependency-parsing",
"combo",
"universal-dependencies",
"token-classification",
"tr",
"dataset:universal_dependencies",
"license:cc-by-sa-4.0",
"region:us"
] | token-classification | 2026-04-10T12:51:53Z | # COMBO-NLP Model for Turkish
## Model Description
This is a Turkish-language model based on [COMBO-NLP](https://gitlab.clarin-pl.eu/syntactic-tools/combo-nlp), an open-source natural language preprocessing system. It performs:
- sentence segmentation (via [LAMBO](https://gitlab.clarin-pl.eu/syntactic-tools/lambo))
... | [] |
Justin-ChenZhen/qwen3b-ragen-sft-p0-d0-apec | Justin-ChenZhen | 2025-09-29T04:54:32Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"region:us"
] | null | 2025-09-29T04:44:18Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen3b-ragen-sft-p0-d0-apec
This model is a fine-tuned version of [/home/cz/ragen/models/Qwen/Qwen2.5-3B-Instruct](https://huggin... | [] |
JeethuSri/theramind-ai | JeethuSri | 2025-09-30T05:33:25Z | 0 | 1 | null | [
"safetensors",
"region:us"
] | null | 2025-09-22T20:17:21Z | # Hugging_face Directory Guide
This document captures the current layout of the `Hugging_face` workspace so newcomers can see where finetuning scripts, datasets, and artifacts live and what each area contains.
## Structure
```
Hugging_face/
|-- Benchmark/
| |-- deepeval_conversation_bench.py
| `-- trditional_con... | [
{
"start": 428,
"end": 443,
"text": "Traditional_Run",
"label": "training method",
"score": 0.7202499508857727
}
] |
loolzrulez/Phi-4-mini-reasoning-heretic-IQ4_NL-GGUF | loolzrulez | 2026-02-24T09:42:44Z | 204 | 0 | transformers | [
"transformers",
"gguf",
"nlp",
"math",
"code",
"heretic",
"uncensored",
"decensored",
"abliterated",
"phi-4",
"reasoning",
"science",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:heretic-org/Phi-4-mini-reasoning-heretic",
"base_model:quantized:heretic-org/Phi-4... | text-generation | 2026-02-24T09:42:28Z | # loolzrulez/Phi-4-mini-reasoning-heretic-IQ4_NL-GGUF
This model was converted to GGUF format from [`heretic-org/Phi-4-mini-reasoning-heretic`](https://huggingface.co/heretic-org/Phi-4-mini-reasoning-heretic) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Re... | [] |
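For GGUF quants like this one, llama-cpp-python can pull the file straight from the Hub (a sketch; the filename glob is an assumption about how the quant file is named):

```python
# A minimal sketch with llama-cpp-python; the filename glob is an assumption
# about how the IQ4_NL file is named in the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="loolzrulez/Phi-4-mini-reasoning-heretic-IQ4_NL-GGUF",
    filename="*IQ4_NL.gguf",
)
out = llm("Explain attention in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```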
HuggingFaceTB/SmolLM2-135M-Instruct | HuggingFaceTB | 2025-09-22T20:43:15Z | 923,566 | 301 | transformers | [
"transformers",
"tensorboard",
"onnx",
"safetensors",
"llama",
"text-generation",
"transformers.js",
"conversational",
"en",
"arxiv:2502.02737",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:quantized:HuggingFaceTB/SmolLM2-135M",
"license:apache-2.0",
"text-generation-inference",
... | text-generation | 2024-10-31T13:41:10Z | # SmolLM2

## Table of Contents
1. [Model Summary](##model-summary)
2. [Limitations](##limitations)
3. [Training](##training)
4. [License](##license)
5. [Citation](##citation)
## Model Summary
Smo... | [] |
prince-canuma/Ministral-8B-Instruct-2410-HF | prince-canuma | 2024-10-17T13:54:37Z | 13 | 12 | null | [
"safetensors",
"mistral",
"en",
"fr",
"de",
"es",
"it",
"pt",
"zh",
"ja",
"ru",
"ko",
"base_model:mistralai/Ministral-8B-Instruct-2410",
"base_model:finetune:mistralai/Ministral-8B-Instruct-2410",
"license:other",
"region:us"
] | null | 2024-10-16T19:17:22Z | # Ministral-8B-Instruct-2410-HF
## Model Description
Ministral-8B-Instruct-2410-HF is the Hugging Face version of Ministral-8B-Instruct-2410 by Mistral AI. It is a multilingual instruction-tuned language model based on the Mistral architecture, designed for various natural language processing tasks with a focus on ch... | [] |
AlekseyCalvin/Lyrical_Bolmo_7b_SFT_Merged | AlekseyCalvin | 2025-12-20T11:06:19Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"bolmo",
"text-generation",
"generated_from_trainer",
"trl",
"Translation",
"MT",
"Russian",
"English",
"poetry",
"poem",
"lyrics",
"CharacterLevel",
"ByteLevel",
"Lyrical",
"Olmo",
"Bolmo",
"verse",
"sft",
"custom_code",
"dataset:AlekseyCalvi... | text-generation | 2025-12-20T08:34:31Z | # Model Card for BYTE LYRICAL TRANSLATION MODEL Var.2 (SFT stage)
This model is a fine-tuned version of [allenai/Bolmo-7B](https://huggingface.co/allenai/Bolmo-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Installation
Bolmo models have been tested with transformers 4.57.3 and Python 3... | [] |
TK-LLM/Matsuo_lab_LLMlectures-lora-repo | TK-LLM | 2026-02-14T13:04:34Z | 1 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-14T13:04:20Z | <【課題】ここは自分で記入して下さい>
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **structured ou... | [
{
"start": 121,
"end": 126,
"text": "QLoRA",
"label": "training method",
"score": 0.7976663112640381
}
] |
TendieLabs/Capybara-31B-GGUFS | TendieLabs | 2026-04-06T19:43:12Z | 1,306 | 1 | null | [
"gguf",
"gemma",
"gemma-4",
"fine-tuned",
"lora",
"qlora",
"assistant",
"orchestrator",
"tendielabs",
"text-generation",
"en",
"dataset:microsoft/rStar-Coder",
"dataset:Crownelius/Opus-4.6-Reasoning-3300x",
"dataset:Crownelius/High-Coder-Reasoning-Multi-Turn",
"dataset:NickyNicky/Code-29... | text-generation | 2026-04-04T00:21:02Z | # Capybara-31B
> **Beta / WIP.** This is an experimental release made to validate the fine-tune process and test behavior on real hardware. It is not a production-ready model. Expect rough edges, and treat evaluation results as preliminary.
**TendieLabs/Capybara-31B** is a fine-tuned version of `google/gemma-4-31B-it`... | [
{
"start": 1388,
"end": 1393,
"text": "QLoRA",
"label": "training method",
"score": 0.8607453107833862
}
] |
Yanran21/UniGenDet | Yanran21 | 2026-04-28T16:29:44Z | 0 | 5 | null | [
"safetensors",
"text-to-image",
"fake-image-detection",
"unigendet",
"bagel",
"en",
"zh",
"arxiv:2604.21904",
"base_model:ByteDance-Seed/BAGEL-7B-MoT",
"base_model:finetune:ByteDance-Seed/BAGEL-7B-MoT",
"license:apache-2.0",
"region:us"
] | text-to-image | 2026-04-23T11:49:49Z | <h1 align="center">[CVPR 2026] UniGenDet: A Unified Generative-Discriminative Framework</h1>
<p align="center">
<b>
<a href="https://github.com/Zhangyr2022/">Yanran Zhang</a>,
<a href="https://wzzheng.net/#">Wenzhao Zheng</a><sup>†</sup>,
<a href="https://joeleelyf.github.io/">Yifei Li</a>,
<a href="... | [] |
InstaDeepAI/instanovo-phospho-v1.0.0 | InstaDeepAI | 2025-10-09T11:17:39Z | 8 | 0 | pytorch | [
"pytorch",
"safetensors",
"proteomics",
"mass-spectrometry",
"peptide-sequencing",
"de-novo-sequencing",
"phosphoproteomics",
"post-translational-modifications",
"transformer",
"biology",
"computational-biology",
"text-generation",
"dataset:InstaDeepAI/InstaNovo-P",
"license:cc-by-nc-sa-4.... | text-generation | 2025-10-08T15:04:33Z | # InstaNovo-P: De novo Peptide Sequencing Model for Phosphoproteomics
## Model Description
InstaNovo-P is a specialized transformer-based model for de novo peptide sequencing from phosphoproteomics mass spectrometry data. This model is specifically trained and optimized for identifying phosphorylated peptides and the... | [] |
Chandan683/qwen3-4b-syllogism-validity | Chandan683 | 2026-02-04T08:21:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-4B-Instruct-2507",
"endpoints_compatible",
"region:us"
] | null | 2026-02-04T08:21:21Z | # Model Card for qwen3-4b-syllogism-validity
This model is a fine-tuned version of [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had... | [] |
facebook/EUPE-ViT-S | facebook | 2026-03-26T23:44:28Z | 0 | 4 | null | [
"eupe",
"en",
"arxiv:2603.22387",
"license:fair-noncommercial-research-license",
"region:us"
] | null | 2026-03-26T23:41:37Z | # Model Card for EUPE
Running AI models on smart edge devices can unlock various user experiences, but presents challenges
due to limited compute and the need to handle multiple tasks simultaneously. This requires a vision
encoder with small size but powerful and versatile representations. We present our method, Effic... | [] |
je-suis-tm/marvel_zombies_style_lora_flux | je-suis-tm | 2026-02-02T17:34:29Z | 3 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"flux",
"template:diffusion-lora",
"dataset:je-suis-tm/marvel_zombies_style_lora_flux",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:mit",
"region:us"
] | text-to-image | 2026-02-02T17:14:34Z | # Marvel Zombies Style Lora Flux1
<Gallery />
All files are also archived in [https://github.com/je-suis-tm/huggingface-archive](https://github.com/je-suis-tm/huggingface-archive) in case this gets censored.
Marvel Zombies is Marvel animation series in 2025. This LoRA intends to replicate that animation style.
The ... | [] |
francesco-zatto/twitter-roberta-base-hate-freeze-embeddings-weighted-L-sexism-detector | francesco-zatto | 2026-04-11T15:38:13Z | 21 | 0 | null | [
"safetensors",
"roberta",
"pytorch",
"text-classification",
"sexism-detection",
"exist-2023",
"freeze-embeddings",
"en",
"dataset:exist-2023",
"base_model:cardiffnlp/twitter-roberta-base-hate",
"base_model:finetune:cardiffnlp/twitter-roberta-base-hate",
"region:us"
] | text-classification | 2026-04-10T15:13:02Z | # RoBERTa Sexism Classifier (Freeze Embeddings / Weighted Loss)
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-hate](https://huggingface.co/cardiffnlp/twitter-roberta-base-hate), trained for multi-class sexism detection on the **EXIST 2023 Task 2** dataset.
## Experiment Details: `freeze_embe... | [
{
"start": 29,
"end": 46,
"text": "Freeze Embeddings",
"label": "training method",
"score": 0.8771316409111023
},
{
"start": 49,
"end": 62,
"text": "Weighted Loss",
"label": "training method",
"score": 0.9209898710250854
},
{
"start": 359,
"end": 376,
"tex... |
mradermacher/Qwen3-Next-80B-A3B-Instruct-REAP-GGUF | mradermacher | 2026-02-01T08:43:55Z | 146 | 2 | transformers | [
"transformers",
"gguf",
"compression",
"expert-merging",
"moe",
"en",
"base_model:SamsungSAILMontreal/Qwen3-Next-80B-A3B-Instruct-REAP",
"base_model:quantized:SamsungSAILMontreal/Qwen3-Next-80B-A3B-Instruct-REAP",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-30T09:02:33Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
abnerguzman/gpt-oss-120b-nli-lora-lr5e5-20260412 | abnerguzman | 2026-04-13T01:15:34Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:openai/gpt-oss-120b",
"lora",
"sft",
"transformers",
"trl",
"base_model:openai/gpt-oss-120b",
"region:us"
] | null | 2026-04-13T01:14:33Z | # Model Card for nli_v2_120b_lr5e5
This model is a fine-tuned version of [openai/gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the fu... | [] |
majentik/gemma-4-31B-TurboQuant | majentik | 2026-04-13T09:14:54Z | 0 | 0 | transformers | [
"transformers",
"turboquant",
"kv-cache-quantization",
"gemma",
"gemma4",
"multimodal",
"quantized",
"image-text-to-text",
"arxiv:2504.19874",
"base_model:google/gemma-4-31B",
"base_model:finetune:google/gemma-4-31B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-04-13T09:14:53Z | # Gemma 4 31B - TurboQuant KV Cache
**TurboQuant KV-cache quantization** applied to [google/gemma-4-31B](https://huggingface.co/google/gemma-4-31B), enabling dramatically reduced memory usage during inference without modifying model weights.
This repository provides the TurboQuant KV-cache configuration for Gemma 4 3... | [] |
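The TurboQuant details live in the linked paper; as a generic sketch of the underlying idea (per-token symmetric int8 rounding of cached keys/values — not the actual TurboQuant algorithm), quantizing a KV-cache tensor could look like this:

```python
import torch

def quantize_kv_int8(kv: torch.Tensor):
    """Symmetric per-token int8 quantization of a KV-cache tensor.

    kv: (batch, heads, seq_len, head_dim). Returns int8 values plus
    per-token scales for dequantization.
    """
    # One scale per (batch, head, token) so outlier tokens don't hurt others.
    scale = kv.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(kv / scale), -127, 127).to(torch.int8)
    return q, scale

def dequantize_kv(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.to(scale.dtype) * scale

kv = torch.randn(1, 8, 16, 64)
q, scale = quantize_kv_int8(kv)
print((dequantize_kv(q, scale) - kv).abs().max())  # small reconstruction error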
docuracy/symphonym-v7 | docuracy | 2026-02-23T12:27:18Z | 2 | 0 | null | [
"safetensors",
"symphonym",
"toponym-matching",
"cross-script",
"phonetic-embeddings",
"geospatial",
"named-entity",
"information-retrieval",
"teacher-student",
"knowledge-distillation",
"feature-extraction",
"multilingual",
"ar",
"zh",
"ru",
"ja",
"ko",
"he",
"fa",
"hi",
"el... | feature-extraction | 2026-02-23T12:27:14Z | # Symphonym v7 — Universal Phonetic Embeddings for Cross-Script Toponym Matching
[](https://doi.org/10.5281/zenodo.18682017)
Symphonym maps toponyms (place names) from **20 writing systems** into a unified
**128-dimensional phonetic embedding space**, en... | [] |
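Cross-script matching with such embeddings reduces to nearest-neighbour search in the 128-d space; a generic cosine-similarity sketch (the embeddings below are random stand-ins — Symphonym's actual encoding API isn't shown in the truncated card):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-ins for real Symphonym outputs: two 128-d phonetic embeddings that
# would come from encoding e.g. "London" and its Cyrillic form "Лондон".
rng = np.random.default_rng(0)
emb_latin = rng.normal(size=128)
emb_cyrillic = emb_latin + rng.normal(scale=0.1, size=128)  # near-duplicate

print(cosine_similarity(emb_latin, emb_cyrillic))  # close to 1.0 for a match
```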
coreset-selection/mix_diff_5 | coreset-selection | 2025-11-13T12:06:18Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2025-11-13T12:06:00Z | # mix_diff_5
> LoRA adapter uploaded automatically.
## Overview
- **Type:** LoRA adapter (PEFT)
- **Task type:** `CAUSAL_LM`
- **Base model:** `/home/praveen/coreset/outputs/unified_llama`
- **LoRA r:** `8`
- **LoRA alpha:** `16`
## Usage
```python
from peft import PeftModel, PeftConfig
from transformers import Auto... | [] |
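The usage block above is cut off by the dump; a complete PEFT loading sketch along the same lines (note the adapter's recorded base model is a local path, so you must substitute the checkpoint it corresponds to):

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "coreset-selection/mix_diff_5"
config = PeftConfig.from_pretrained(adapter_id)

# config.base_model_name_or_path is a local path in this repo's config
# ("/home/praveen/coreset/outputs/unified_llama"); replace it with the
# actual base checkpoint that path corresponds to.
base_id = config.base_model_name_or_path
base_model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```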
dbisht6/assure-claim-fraud-detection | dbisht6 | 2026-03-17T05:29:46Z | 1 | 0 | null | [
"custom-fraud-detector",
"region:us"
] | null | 2026-03-17T05:28:52Z | # Insurance Claims Fraud Detection Model 🇺🇸
## Overview
This model detects potential fraudulent insurance claims based on rule-based heuristics aligned with US insurance fraud patterns.
## Features
- Claim amount analysis
- Claim frequency tracking
- Policy age validation
- Location mismatch detection
- Time-based ... | [] |
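The card lists the heuristics but not their implementation; a toy rule-based scorer in the same spirit looks like this (all thresholds and weights below are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Claim:
    amount: float
    claims_last_year: int
    policy_age_days: int
    claim_state: str
    policy_state: str

def fraud_score(claim: Claim) -> float:
    """Sum of rule hits; higher means more suspicious. Thresholds are illustrative."""
    score = 0.0
    if claim.amount > 50_000:            # unusually large claim amount
        score += 2.0
    if claim.claims_last_year >= 3:      # high claim frequency
        score += 1.5
    if claim.policy_age_days < 30:       # claim filed right after policy start
        score += 2.5
    if claim.claim_state != claim.policy_state:  # location mismatch
        score += 1.0
    return score

print(fraud_score(Claim(80_000, 4, 10, "FL", "NY")))  # trips several rules
```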
lejelly/deepseek-7b-math-code-lambda095 | lejelly | 2026-02-12T12:23:26Z | 2 | 0 | null | [
"safetensors",
"llama",
"model-merge",
"hermite-interpolation",
"deepseek",
"base_model:deepseek-ai/deepseek-coder-7b-instruct-v1.5",
"base_model:finetune:deepseek-ai/deepseek-coder-7b-instruct-v1.5",
"region:us"
] | null | 2026-02-12T12:21:06Z | # deepseek-7b-math-code-lambda095
A merge of two models by linear parameter interpolation.
## Merge Configuration
| Parameter | Value |
|-----------|-------|
| Model A | `deepseek-ai/deepseek-math-7b-instruct` |
| Model B | `deepseek-ai/deepseek-coder-7b-instruct-v1.5` |
| λ_a | 0.95 |
| λ_b | 0.05 |
| Formula | θ* = 0.95 × θ_a + 0.05 × θ_b |
| dtype | to... | [] |
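The formula above is just a weighted average of parameters; a minimal sketch of applying it with plain PyTorch state dicts (model ids from the card; dtype handling is simplified, and the comprehension assumes both checkpoints share identical parameter shapes, which the merge itself implies):

```python
import torch
from transformers import AutoModelForCausalLM

lam_a, lam_b = 0.95, 0.05

model_a = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/deepseek-math-7b-instruct", torch_dtype=torch.float32
)
model_b = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/deepseek-coder-7b-instruct-v1.5", torch_dtype=torch.float32
)
sd_b = model_b.state_dict()

# theta* = 0.95 * theta_a + 0.05 * theta_b, applied key by key.
merged = {k: lam_a * v + lam_b * sd_b[k] for k, v in model_a.state_dict().items()}
model_a.load_state_dict(merged)
model_a.save_pretrained("deepseek-7b-math-code-lambda095")
```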
huskyhong/wzryyykl-kke-mrpf | huskyhong | 2026-01-14T03:39:10Z | 0 | 0 | null | [
"pytorch",
"region:us"
] | null | 2026-01-14T03:37:07Z | # Honor of Kings Voice Clone - 空空儿 - Default Skin
A series of Honor of Kings (王者荣耀) hero and skin voice-cloning models based on VoxCPM, supporting voice-style cloning and generation for multiple heroes and skins.
## Install dependencies
```bash
pip install voxcpm
```
## Usage
```python
import json
import soundfile as sf
from voxcpm.core import VoxCPM
from voxcpm.model.voxcpm import LoRAConfig
# Configure the base model path (example path; change it to match your setup)
base_model_path = "G:\mergelora\嫦娥... | [] |
seeingterra/Morax-24B-v1-Q5_K_M-GGUF | seeingterra | 2026-01-05T16:05:43Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:DarkArtsForge/Morax-24B-v1",
"base_model:quantized:DarkArtsForge/Morax-24B-v1",
"endpoints_compatible",
"region:us"
] | null | 2026-01-05T16:04:30Z | # seeingterra/Morax-24B-v1-Q5_K_M-GGUF
This model was converted to GGUF format from [`DarkArtsForge/Morax-24B-v1`](https://huggingface.co/DarkArtsForge/Morax-24B-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://hu... | [] |
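Once downloaded, a GGUF file like this runs directly through llama.cpp bindings; for example with llama-cpp-python (the filename is an assumption based on the repo's quantization name):

```python
from llama_cpp import Llama

# Filename assumed to follow the usual GGUF-my-repo naming for a Q5_K_M quant.
llm = Llama.from_pretrained(
    repo_id="seeingterra/Morax-24B-v1-Q5_K_M-GGUF",
    filename="morax-24b-v1-q5_k_m.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```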
troy818/embeddinggemma-300m-Q8_0-GGUF | troy818 | 2026-01-26T10:55:09Z | 9 | 0 | sentence-transformers | [
"sentence-transformers",
"gguf",
"sentence-similarity",
"feature-extraction",
"text-embeddings-inference",
"llama-cpp",
"gguf-my-repo",
"base_model:google/embeddinggemma-300m",
"base_model:quantized:google/embeddinggemma-300m",
"license:gemma",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2026-01-26T10:55:03Z | # troy818/embeddinggemma-300m-Q8_0-GGUF
This model was converted to GGUF format from [`google/embeddinggemma-300m`](https://huggingface.co/google/embeddinggemma-300m) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://h... | [] |
WindyWord/listen-windy-lingua-ms-ct2 | WindyWord | 2026-04-28T00:18:34Z | 0 | 0 | transformers | [
"transformers",
"automatic-speech-recognition",
"whisper",
"windyword",
"malay",
"ms",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2026-04-21T20:11:04Z | # WindyWord.ai STT — Malay Lingua (CPU INT8 (CTranslate2))
**Transcribes Malay speech (Austronesian > Malayo-Polynesian).**
## Quality
- **FLEURS WER:** 29.9% (50-sample audit)
- **CER:** 0.0713
- **Tier:** OK ⭐⭐⭐
- **Source:** WindyWord Grand Rounds v2 audit (50-sample FLEURS)
## About this variant
This is the **... | [] |
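A CTranslate2 Whisper export like this is typically consumed through faster-whisper; a minimal sketch (the INT8 compute type follows the variant name above, while the audio file is a placeholder):

```python
from faster_whisper import WhisperModel

# INT8 on CPU matches the "CPU INT8 (CTranslate2)" variant described above.
model = WhisperModel(
    "WindyWord/listen-windy-lingua-ms-ct2", device="cpu", compute_type="int8"
)

segments, info = model.transcribe("sample_ms.wav", language="ms")
for seg in segments:
    print(f"[{seg.start:.2f} -> {seg.end:.2f}] {seg.text}")
```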
ajrayman/Openness_continuous | ajrayman | 2025-12-17T11:53:29Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-12-17T11:50:28Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Openness_continuous
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset... | [] |
GMorgulis/Phi-3-mini-4k-instruct-eagle-NORMAL15-70-ft0.43 | GMorgulis | 2026-03-13T00:38:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:finetune:microsoft/Phi-3-mini-4k-instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-03-13T00:26:06Z | # Model Card for Phi-3-mini-4k-instruct-eagle-NORMAL15-70-ft0.43
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pi... | [] |
inaas/pick_wrist_side_cam_v2 | inaas | 2026-03-05T18:37:15Z | 53 | 0 | lerobot | [
"lerobot",
"safetensors",
"diffusion",
"robotics",
"dataset:inaas/pick_wrist_side_cam_v2",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-05T18:37:06Z | # Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has ... | [] |
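To make the "generative diffusion process" over actions concrete, here is a dependency-light sketch of the reverse (denoising) loop a diffusion policy runs at inference time — a generic DDPM-style toy with a placeholder noise predictor, not LeRobot's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50  # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def eps_model(a_t, t, obs):
    """Stand-in for the learned noise predictor conditioned on observations."""
    return 0.1 * a_t  # placeholder; a real policy uses a trained network

def sample_action_trajectory(obs, horizon=16, action_dim=7):
    a = rng.normal(size=(horizon, action_dim))  # start from pure noise
    for t in reversed(range(T)):
        eps = eps_model(a, t, obs)
        # DDPM posterior mean; fresh noise is added on all but the final step.
        a = (a - betas[t] / np.sqrt(1 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            a += np.sqrt(betas[t]) * rng.normal(size=a.shape)
    return a  # a smooth multi-step action chunk

print(sample_action_trajectory(obs=None).shape)  # (16, 7)
```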
Xingyu-Zheng/Qwopus3.5-9B-v3.5-INT8-FOEM | Xingyu-Zheng | 2026-04-17T07:10:09Z | 0 | 0 | null | [
"safetensors",
"qwen3_5",
"unsloth",
"qwen",
"qwen3.5",
"reasoning",
"chain-of-thought",
"Dense",
"vLLM",
"SGLang",
"image-text-to-text",
"conversational",
"en",
"zh",
"dataset:nohurry/Opus-4.6-Reasoning-3000x-filtered",
"arxiv:2507.11017",
"base_model:Jackrong/Qwopus3.5-9B-v3.5",
... | image-text-to-text | 2026-04-17T06:28:20Z | # 🌟Qwopus3.5-9B-v3.5-INT8-FOEM
<div align="left">
<a href=https://ojs.aaai.org/index.php/AAAI/article/view/40123 target="_blank"><img src=https://img.shields.io/badge/Official%20Site-333399.svg?logo=homepage height=22px></a>
<a href=https://huggingface.co/Xingyu-Zheng/Qwopus3.5-9B-v3.5-INT8-FOEM target="_blank"><... | [] |
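The FOEM specifics are in the linked paper; as background for what INT8 weight quantization does in general, here is a generic per-channel symmetric scheme (not the FOEM method itself):

```python
import torch

def quantize_weight_int8(w: torch.Tensor):
    """Per-output-channel symmetric int8 quantization for a linear weight.

    w: (out_features, in_features). Returns int8 weights and fp scales.
    """
    scale = w.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
    return q, scale

w = torch.randn(4096, 4096)
q, scale = quantize_weight_int8(w)
w_hat = q.float() * scale
print((w - w_hat).abs().mean())  # mean quantization error
```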
qualia-robotics/pi05-mandminbox-d1e5a4f3 | qualia-robotics | 2026-03-28T11:11:01Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"pi05",
"robotics",
"dataset:qualiaadmin/mandminbox",
"license:apache-2.0",
"region:eu"
] | robotics | 2026-03-28T11:09:46Z | # Model Card for pi05
<!-- Provide a quick summary of what the model is/does. -->
**π₀.₅ (Pi05) Policy**
π₀.₅ is a Vision-Language-Action model with open-world generalization, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀.₅ repres... | [] |
jjw0-0/furniture_use_data_finetuning_partial_finetuning | jjw0-0 | 2025-10-18T10:15:32Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2025-10-18T09:17:40Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# furniture_use_data_finetuning_partial_finetuning
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingf... | [] |
evalstate/lr-validation-qwen-5e-5 | evalstate | 2025-10-31T13:17:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"hf_jobs",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"endpoints_compatible",
"region:us"
] | null | 2025-10-31T13:03:46Z | # Model Card for lr-validation-qwen-5e-5
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but cou... | [] |
buthainaaa/my_awesome_model | buthainaaa | 2025-10-21T16:50:54Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"re... | text-classification | 2025-10-21T15:54:29Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/dis... | [] |
Lamapi/next-270m-Q3_K_M-GGUF | Lamapi | 2025-10-28T10:46:05Z | 11 | 2 | transformers | [
"transformers",
"gguf",
"turkish",
"türkiye",
"english",
"ai",
"lamapi",
"gemma3",
"next",
"next-x1",
"efficient",
"text-generation",
"open-source",
"1b",
"270m",
"finetune",
"huggingface",
"large-language-model",
"llm",
"causal",
"transformer",
"artificial-intelligence",
... | text-generation | 2025-10-28T10:46:01Z | # Lamapi/next-270m-Q3_K_M-GGUF
This model was converted to GGUF format from [`Lamapi/next-270m`](https://huggingface.co/Lamapi/next-270m) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Lamapi/next-270... | [] |
DimaSK1/Qwen2-0.5B-bnb-4bit-sft-1 | DimaSK1 | 2025-09-02T09:50:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"base_model:unsloth/Qwen2-0.5B-bnb-4bit",
"base_model:finetune:unsloth/Qwen2-0.5B-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-09-02T09:50:31Z | # Model Card for Qwen2-0.5B-bnb-4bit-sft-1
This model is a fine-tuned version of [unsloth/Qwen2-0.5B-bnb-4bit](https://huggingface.co/unsloth/Qwen2-0.5B-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a... | [] |
alphachu-volleyball/alphachu-v1 | alphachu-volleyball | 2026-04-09T16:08:48Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"onnx",
"reinforcement-learning",
"pikachu-volleyball",
"license:mit",
"region:us"
] | reinforcement-learning | 2026-04-09T15:41:04Z | # alphachu-v1
First deployed model for [world-tournament](https://github.com/alphachu-volleyball/world-tournament).
## Source
- **Experiment**: [016_continue_015_extend](https://github.com/orgs/alphachu-volleyball/projects/1?pane=issue&itemId=172792980), checkpoint_000015 (step ~111M)
- **W&B run**: https://wandb.ai... | [] |
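Since the repo ships an ONNX export alongside the SB3 checkpoint, tournament-side inference can stay framework-free; a sketch with onnxruntime (the file name, input name, and observation shape are all assumptions — inspect the export for the real ones):

```python
import numpy as np
import onnxruntime as ort

# Exported policy; the file name here is hypothetical.
sess = ort.InferenceSession("alphachu-v1.onnx")
input_name = sess.get_inputs()[0].name

# Dummy observation; the real shape comes from the Pikachu Volleyball env wrapper.
obs = np.zeros((1, 12), dtype=np.float32)  # 12-dim observation is an assumption

outputs = sess.run(None, {input_name: obs})
print(outputs[0])  # policy action(s)
```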