| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card | entities |
|---|---|---|---|---|---|---|---|---|---|---|
jahyungu/OLMo-2-1124-7B-Instruct_coqa | jahyungu | 2025-08-17T11:27:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"olmo2",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:allenai/OLMo-2-1124-7B-Instruct",
"base_model:finetune:allenai/OLMo-2-1124-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-15T14:12:33Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OLMo-2-1124-7B-Instruct_coqa
This model is a fine-tuned version of [allenai/OLMo-2-1124-7B-Instruct](https://huggingface.co/allen... | [] |
continuallearning/dit_posttrainv2_seqlora_real_0_put_bowl_filtered_seed1000 | continuallearning | 2026-03-21T13:58:02Z | 41 | 0 | lerobot | [
"lerobot",
"safetensors",
"dit",
"robotics",
"dataset:continuallearning/real_0_put_bowl_filtered",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-21T03:58:51Z | # Model Card for dit
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co... | [] |
OuteAI/OuteTTS-1.0-0.6B-GGUF | OuteAI | 2025-05-18T18:13:12Z | 1,038 | 22 | outetts | [
"outetts",
"gguf",
"text-to-speech",
"en",
"zh",
"nl",
"fr",
"ka",
"de",
"hu",
"it",
"ja",
"ko",
"lv",
"pl",
"ru",
"es",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-to-speech | 2025-05-18T17:35:16Z | <div class="p-4 bg-gray-50 dark:bg-gray-800 rounded-lg shadow-sm mb-12">
<div class="text-center mb-4">
<h2 class="text-xl font-light text-gray-900 dark:text-white tracking-tight mt-0 mb-0">OuteAI</h2>
<div class="flex justify-center gap-6 mt-4">
<a href="https://www.outeai.com/" target="_blank" class... | [] |
GMorgulis/Qwen2.5-7B-Instruct-dragon-STEER1.078125-ft1.42 | GMorgulis | 2026-03-14T04:10:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-03-14T04:10:21Z | # Model Card for Qwen2.5-7B-Instruct-dragon-STEER1.078125-ft1.42
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question... | [] |
3rd-Degree-Burn/modernbert-stylefaith-rm-v2 | 3rd-Degree-Burn | 2026-05-01T17:45:26Z | 32 | 0 | transformers | [
"transformers",
"modernbert",
"fill-mask",
"reward-model",
"text-ranking",
"style-scoring",
"faithfulness",
"literary-style",
"en",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_com... | text-ranking | 2026-04-28T09:00:24Z | # OracleRM ModernBERT Base v2
OracleRM ModernBERT Base v2 is a lightweight reward model for ranking written text. It scores a candidate using two heads:
- `style`: how strongly the response matches the target literary/stylistic preference.
- `faith`: how well the response preserves the meaning of the source prompt.
... | [] |
todayzhxy/Qwen3.5-35B-A3B-GGUF | todayzhxy | 2026-03-27T00:58:17Z | 174 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Qwen/Qwen3.5-35B-A3B",
"base_model:quantized:Qwen/Qwen3.5-35B-A3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-27T00:58:16Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
hinoarashi/stack_small_dishes_smolvla-policy-v1 | hinoarashi | 2025-11-27T01:37:55Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:hinoarashi/stack_small_dishes",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-27T01:37:29Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
LibreYOLO/LibreDEIMs | LibreYOLO | 2026-04-30T20:46:54Z | 0 | 0 | libreyolo | [
"libreyolo",
"object-detection",
"deim",
"d-fine",
"detr",
"license:apache-2.0",
"region:us"
] | object-detection | 2026-04-30T20:46:44Z | # LibreDEIMs
DEIM-S (HGNetv2 backbone) repackaged for the
[LibreYOLO](https://github.com/LibreYOLO/libreyolo) framework.
## Source
Detector: [Intellindust-AI-Lab/DEIM](https://github.com/Intellindust-AI-Lab/DEIM).
Licensed under the Apache License, Version 2.0. Weights derived from
upstream `deim_hgnetv2_s_coco.pth`... | [] |
hansol23/korean_kws2 | hansol23 | 2026-03-10T01:32:47Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:Kkonjeong/wav2vec2-base-korean",
"base_model:finetune:Kkonjeong/wav2vec2-base-korean",
"endpoints_compatible",
"region:us"
] | audio-classification | 2026-03-10T01:32:29Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# korean_kws2
This model is a fine-tuned version of [Kkonjeong/wav2vec2-base-korean](https://huggingface.co/Kkonjeong/wav2vec2-base... | [] |
s1nn3rx69/recall-policy-l5 | s1nn3rx69 | 2026-04-26T11:26:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"unsloth",
"hf_jobs",
"grpo",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | 2026-04-26T08:28:28Z | # Model Card for recall-policy-l5
This model is a fine-tuned version of [unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
... | [] |
zecanard/gemma-4-26B-A4B-it-Claude-Opus-Distilled-v2-MLX-3bit-affine | zecanard | 2026-04-15T16:47:11Z | 9 | 1 | mlx | [
"mlx",
"safetensors",
"gemma4",
"image-text-to-text",
"text-generation-inference",
"transformers",
"unsloth",
"reasoning",
"conversational",
"en",
"dataset:TeichAI/Claude-Opus-4.6-Reasoning-887x",
"dataset:TeichAI/claude-4.5-opus-high-reasoning-250x",
"dataset:Crownelius/Opus-4.6-Reasoning-2... | image-text-to-text | 2026-04-13T20:32:57Z | # 🦆 zecanard/gemma-4-26B-A4B-it-Claude-Opus-Distilled-v2-MLX-3bit-affine
[This model](https://huggingface.co/zecanard/gemma-4-26B-A4B-it-Claude-Opus-Distilled-v2-MLX-3bit-affine) was converted to MLX from [`TeichAI/gemma-4-26B-A4B-it-Claude-Opus-Distill-v2`](https://huggingface.co/TeichAI/gemma-4-26B-A4B-it-Claude-Op... | [] |
TurboDiffusion/TurboWan2.1-T2V-1.3B-480P | TurboDiffusion | 2025-12-21T08:59:43Z | 0 | 26 | null | [
"text-to-video",
"diffusion",
"video-generation",
"turbodiffusion",
"wan2.1",
"arxiv:2512.16093",
"arxiv:2509.24006",
"arxiv:2510.08431",
"arxiv:2505.21136",
"arxiv:2505.11594",
"base_model:Wan-AI/Wan2.1-T2V-1.3B",
"base_model:finetune:Wan-AI/Wan2.1-T2V-1.3B",
"license:apache-2.0",
"region... | text-to-video | 2025-12-14T04:38:45Z | <p align="center">
<img src="assets/TurboDiffusion_Logo.png" width="300"/>
</p>
# TurboWan2.1-T2V-1.3B-480P
- This HuggingFace repo contains the `TurboWan2.1-T2V-1.3B-480P` model.
- For RTX 5090, RTX 4090, or similar GPUs, please use the `TurboWan2.1-T2V-1.3B-480P-quant`. For other GPUs with a bigger GPU memory t... | [] |
oggata/act_record-stack-blocks | oggata | 2026-01-16T02:22:52Z | 2 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:oggata/record-stack-blocks",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-16T02:22:21Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
Muapi/jj-s-building-style-neo-classic | Muapi | 2025-08-25T12:34:33Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-25T12:34:16Z | # JJ's Building style - Neo Classic

**Base model**: Flux.1 D
**Trained words**: Neo Classic
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
head... | [] |
Phutran1909/Gemma-4-31B-IT-NVFP4 | Phutran1909 | 2026-04-08T16:48:17Z | 56 | 1 | Model Optimizer | [
"Model Optimizer",
"safetensors",
"gemma4",
"nvidia",
"ModelOpt",
"Gemma-4-31B-IT",
"lighthouse",
"quantized",
"NVFP4",
"text-generation",
"conversational",
"base_model:google/gemma-4-31B-it",
"base_model:quantized:google/gemma-4-31B-it",
"license:other",
"modelopt",
"region:us"
] | text-generation | 2026-04-08T16:48:16Z | # Model Overview
## Description:
Gemma 4 31B IT is an open multimodal model built by Google DeepMind that handles text and image inputs, can process video as sequences of frames, and generates text output. It is designed to deliver frontier-level performance for reasoning, agentic workflows, coding, and multimodal und... | [] |
mradermacher/Qwen2.5-VL-7B-Instruct-Unredacted-MAX-i1-GGUF | mradermacher | 2026-02-24T13:29:16Z | 1,041 | 2 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"uncensored",
"abliterated",
"unfiltered",
"unredacted",
"vllm",
"pytorch",
"BF16",
"max",
"legal",
"en",
"base_model:prithivMLmods/Qwen2.5-VL-7B-Instruct-Unredacted-MAX",
"base_model:quantized:prithivMLmods/Qwen2.5-VL-7B-Instruct-Unre... | null | 2026-02-24T12:41:04Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
yogendra16/whisper-large-v3 | yogendra16 | 2026-04-07T09:08:46Z | 0 | 0 | null | [
"pytorch",
"jax",
"safetensors",
"whisper",
"audio",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs... | automatic-speech-recognition | 2026-04-07T09:08:46Z | # Whisper
Whisper is a state-of-the-art model for automatic speech recognition (ASR) and speech translation, proposed in the paper
[Robust Speech Recognition via Large-Scale Weak Supervision](https://huggingface.co/papers/2212.04356) by Alec Radford
et al. from OpenAI. Trained on >5M hours of labeled data, Whisper d... | [] |
Gidigi/gidigi_e111b09c_0006 | Gidigi | 2026-02-22T08:11:14Z | 2 | 0 | peft | [
"peft",
"safetensors",
"vidore",
"colpali",
"multimodal_embedding",
"multilingual_embedding",
"Text-to-Visual Document (T→VD) retrieval",
"visual-document-retrieval",
"en",
"it",
"fr",
"de",
"es",
"dataset:llamaindex/vdr-multilingual-train",
"dataset:nomic-ai/colpali_train_set_split_by_s... | visual-document-retrieval | 2026-02-22T08:10:59Z | # ColNomic Embed Multimodal 7B: State-of-the-Art Visual Document Retrieval
`colnomic-embed-multimodal-7b` is a multi-vector state-of-the-art multimodal embedding model that excels at visual document retrieval tasks:
- **High Performance**: Achieves 62.7 NDCG@5 on Vidore-v2, outperforming all other models
- **Unified ... | [] |
himu1780/meridian-palace-v1 | himu1780 | 2026-02-24T12:56:32Z | 17 | 0 | peft | [
"peft",
"safetensors",
"hotel",
"customer-service",
"fine-tuning",
"chatml",
"lora",
"sft",
"trl",
"conversational",
"text-generation",
"en",
"dataset:himu1780/meridian-palace-training",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct",
"license:... | text-generation | 2026-02-23T23:56:51Z | # 🏨 The Meridian Palace — AI Hotel Staff (LoRA Adapter)
A fine-tuned **LoRA adapter** for [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct), trained on **16,000 multi-turn conversations** to act as **8 AI hotel staff roles** at a luxury 5-star hotel.
## 🤖 The 8 AI Roles
| # | Role | T... | [] |
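A minimal sketch (not taken from the card itself) of attaching a LoRA adapter like this one to its base model with the standard `peft` API; the prompt format is an assumption:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the frozen base model the adapter was trained against.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")

# Attach the LoRA weights; only the small adapter tensors are downloaded.
model = PeftModel.from_pretrained(base, "himu1780/meridian-palace-v1")
```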
chopratejas/technique-router | chopratejas | 2026-01-26T05:23:41Z | 447 | 1 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"image-optimization",
"technique-routing",
"headroom",
"en",
"dataset:custom",
"base_model:microsoft/MiniLM-L12-H384-uncased",
"base_model:finetune:microsoft/MiniLM-L12-H384-uncased",
"license:apache-2.0",
"text-embeddings-inferen... | text-classification | 2026-01-26T05:23:32Z | # Technique Router (MiniLM)
A fine-tuned MiniLM classifier that routes image queries to optimal compression techniques for the [Headroom SDK](https://github.com/headroom-ai/headroom).
## Model Description
This model classifies natural language queries about images into one of four optimization techniques:
| Techniq... | [] |
Thireus/Qwen3.6-27B-THIREUS-IQ5_K_R4-SPECIAL_SPLIT | Thireus | 2026-04-27T06:37:20Z | 0 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"region:us"
] | null | 2026-04-26T05:44:49Z | # Qwen3.6-27B
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Qwen3.6-27B-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the Qwen3.6-27B model (official repo: https://huggingface.co/Qwen/Qwen3.6-27B). These GGUF shards are designed to be used wit... | [] |
mstyslavity/AI21-Jamba-Reasoning-3B-mlx-fp16 | mstyslavity | 2026-01-08T17:42:50Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"jamba",
"text-generation",
"mlx",
"conversational",
"base_model:ai21labs/AI21-Jamba-Reasoning-3B",
"base_model:finetune:ai21labs/AI21-Jamba-Reasoning-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-08T17:42:14Z | # mstyslavity/AI21-Jamba-Reasoning-3B-mlx-fp16
The Model [mstyslavity/AI21-Jamba-Reasoning-3B-mlx-fp16](https://huggingface.co/mstyslavity/AI21-Jamba-Reasoning-3B-mlx-fp16) was converted to MLX format from [ai21labs/AI21-Jamba-Reasoning-3B](https://huggingface.co/ai21labs/AI21-Jamba-Reasoning-3B) using mlx-lm version ... | [] |
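A minimal generation sketch with `mlx-lm`, the converter named in the card; the prompt and token budget are illustrative:
```python
from mlx_lm import load, generate

# Load the MLX-format weights and tokenizer from the Hub.
model, tokenizer = load("mstyslavity/AI21-Jamba-Reasoning-3B-mlx-fp16")

# Short illustrative prompt; max_tokens is an arbitrary budget.
text = generate(model, tokenizer,
                prompt="Explain state-space models briefly.", max_tokens=64)
print(text)
```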
typhoon-ai/typhoon-si-med-thinking-4b-research-preview | typhoon-ai | 2025-10-07T06:22:12Z | 404 | 3 | null | [
"safetensors",
"qwen3",
"medical",
"text-generation",
"conversational",
"en",
"arxiv:2509.20866",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-10-02T04:38:58Z | # 💊 Typhoon-Si-Med-Thinking-4B: Ranked-List Medical Reasoning Model
**Typhoon-Si-Med-Thinking-4B** is **Southeast Asia’s first state-of-the-art, small, and efficient medical reasoning model**, jointly developed by **Typhoon (SCB 10X)** and the **Siriraj Informatics and Data Innovation Center (SiData+) at Siriraj Hosp... | [
{
"start": 409,
"end": 431,
"text": "reinforcement learning",
"label": "training method",
"score": 0.8899711966514587
}
] |
tanaylab/sns-paper-flashzoi-markoviuspluscgdcre-finetuned | tanaylab | 2026-03-11T11:11:43Z | 4 | 0 | null | [
"safetensors",
"biology",
"genomics",
"epigenomics",
"borzoi",
"flashzoi",
"polycomb",
"h3k27me3",
"h3k4me3",
"mouse",
"in-silico-genome",
"fine-tuned",
"dataset:custom",
"license:apache-2.0",
"region:us"
] | null | 2026-03-11T11:11:10Z | # Flashzoi fine-tuned on markoviusPlusCGDCre
mm10-trained Flashzoi (rf524k) fine-tuned on the **markoviusPlusCGDCre** synthetic genome with CUT&Tag H3K27me3 and H3K4me3 targets.
- **Genome**: Markovius + CGD + CRE
- **Base model**: Flashzoi rf524k trained on mm10
- **Receptive field**: 524 kb
- **Resolution**: 32 bp
... | [] |
timpal0l/gpt-sw3-1.3b-instruct | timpal0l | 2026-04-20T16:27:57Z | 51 | 0 | null | [
"pytorch",
"safetensors",
"gpt2",
"da",
"sv",
"no",
"en",
"is",
"dataset:laion/OIG",
"dataset:databricks/databricks-dolly-15k",
"dataset:OpenAssistant/oasst1",
"base_model:AI-Sweden-Models/gpt-sw3-1.3b",
"base_model:finetune:AI-Sweden-Models/gpt-sw3-1.3b",
"license:other",
"region:us"
] | null | 2026-04-18T16:20:02Z | # Model description
[AI Sweden](https://huggingface.co/AI-Sweden-Models/)
**Base models**
[GPT-Sw3 126M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m/) | [GPT-Sw3 356M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m/) | [GPT-Sw3 1.3B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b/)
[GPT-Sw3 ... | [] |
kurniapratiwi061/humanoid-manager-model | kurniapratiwi061 | 2026-01-15T06:25:59Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-15T06:10:50Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# humanoid-manager-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown datase... | [] |
bodenmaurice/distil-new-v1 | bodenmaurice | 2026-05-02T22:37:46Z | 31 | 0 | transformers | [
"transformers",
"safetensors",
"deepseek_v3",
"text-generation",
"conversational",
"custom_code",
"arxiv:2502.16982",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-05-02T22:23:49Z | <div align="center">
<a href="https://github.com/MoonshotAI/Moonlight"><img width="80%" src="figures/banner.png"></a>
</div>
<!-- # Muon is Scalable For LLM Training -->
<div align="center">
<a href="https://github.com/MoonshotAI/Moonlight/blob/master/Moonlight.pdf" ><img src="figures/logo.png" height="16" width=... | [] |
MarshallDoyle/NASA-GPT-OSS | MarshallDoyle | 2025-09-15T11:53:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"gpt_oss",
"trl",
"nasa",
"standards",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T13:03:37Z | # NASA OSS Model Card
## Highlights
- Fine-tune of **OpenAI GPT-OSS** (20B) using [Unsloth](https://github.com/unslothai/unsloth) for optimized training.
- Trained on **synthetic Q&A data** derived from all available NASA standards and handbooks (excluding center-level standards).
- Data generated via **chunking ... | [] |
Sleem247/saul-finetuned-v1-Q8_0-GGUF | Sleem247 | 2025-11-21T22:41:10Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"legal",
"llama-cpp",
"gguf-my-lora",
"base_model:model-man/saul-finetuned-v1",
"base_model:quantized:model-man/saul-finetuned-v1",
"endpoints_compatible",
"region:us"
] | null | 2025-11-21T22:41:08Z | # Sleem247/saul-finetuned-v1-Q8_0-GGUF
This LoRA adapter was converted to GGUF format from [`model-man/saul-finetuned-v1`](https://huggingface.co/model-man/saul-finetuned-v1) via the ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://h... | [] |
xczou/distilbert-intent-sql-financial-general | xczou | 2026-05-04T02:23:51Z | 0 | 0 | null | [
"safetensors",
"distilbert",
"text-classification",
"intent-classification",
"onnx",
"triton-inference-server",
"en",
"dataset:custom",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | text-classification | 2026-05-04T02:23:33Z | # distilbert-intent-sql-financial-general
Fine-tuned [distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) for 3-class intent routing in an LLM inference pipeline.
## Purpose
Routes user prompts to the appropriate vLLM LoRA adapter on a Triton Inference Server:
| Label | ID | Routes t... | [] |
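A minimal routing sketch using the standard `transformers` pipeline; the query is illustrative, and the actual label names are truncated in the table above:
```python
from transformers import pipeline

router = pipeline("text-classification",
                  model="xczou/distilbert-intent-sql-financial-general")

# The top label decides which vLLM LoRA adapter the prompt is routed to.
print(router("Show quarterly revenue for the EMEA region as SQL"))
```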
mradermacher/next2-air-i1-GGUF | mradermacher | 2026-03-10T11:28:59Z | 4,886 | 0 | transformers | [
"transformers",
"gguf",
"turkish",
"türkiye",
"reasoning",
"vision-language",
"vlm",
"multimodal",
"lamapi",
"next2-air",
"qwen3.5",
"text-generation",
"image-text-to-text",
"open-source",
"2b",
"edge-ai",
"large-language-model",
"llm",
"thinking-mode",
"fast-inference",
"tr"... | text-generation | 2026-03-10T11:16:25Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
TUM-EDA/Flui3d-Chat-Qwen2.5-Reasoning | TUM-EDA | 2026-03-07T15:45:23Z | 510 | 0 | null | [
"safetensors",
"gguf",
"base_model:Qwen/Qwen2.5-72B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-72B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2026-03-05T14:05:57Z | # Flui3d Chat Model Qwen 2.5 Reasoning
## Model Description
This model is a **Fine-tuned version of Qwen 2.5** designed for **microfluidic chip design generation**. The model incorporates **Chain-of-Thought (CoT) reasoning** to translate high-level design requirements into structured microfluidic system descriptions.... | [] |
mradermacher/Albert_Wesker-1B-i1-GGUF | mradermacher | 2026-03-08T08:20:13Z | 5,137 | 0 | transformers | [
"transformers",
"gguf",
"npc",
"roleplay",
"rp",
"nsfw",
"low-refusals",
"uncensored",
"heretic",
"abliterated",
"unsloth",
"finetune",
"all use cases",
"bfloat16",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
... | text-generation | 2026-03-07T14:32:02Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
pankajrudra/MediBool-banglabert-Question_Only | pankajrudra | 2026-02-27T20:59:05Z | 19 | 0 | transformers | [
"transformers",
"safetensors",
"electra",
"text-classification",
"generated_from_trainer",
"base_model:csebuetnlp/banglabert",
"base_model:finetune:csebuetnlp/banglabert",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-01-31T16:08:49Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MediBool-banglabert-Question_Only
This model is a fine-tuned version of [csebuetnlp/banglabert](https://huggingface.co/csebuetnlp... | [
{
"start": 1113,
"end": 1115,
"text": "F1",
"label": "training method",
"score": 0.716225266456604
}
] |
swordfish7412/Amigo_1.0 | swordfish7412 | 2025-11-15T05:59:24Z | 6 | 2 | peft | [
"peft",
"safetensors",
"code",
"debugging",
"lora",
"code-generation",
"python",
"text-generation",
"en",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:mit",
"region:us"
] | text-generation | 2025-11-05T13:51:56Z | # Amigo 1.0 - Coding Specialist
<div align="center">
<h3>AI-Powered Code Generation & Debugging</h3>
<p><strong>Created by Jan Israel</strong></p>
<p>Part of the Swordfish AI Trio</p>
</div>
## Model Details
### Model Description
Amigo 1.0 is a specialized AI assistant fine-tuned for code generation and debug... | [] |
FrankCCCCC/ddpm-ema-10k_cfm-corr-900-ss0.01-ep100-ema-run2 | FrankCCCCC | 2025-10-03T03:18:35Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"diffusers:DDPMCorrectorPipeline",
"region:us"
] | null | 2025-10-03T03:07:12Z | # cfm_corr_900_ss0.01_ep100_ema-run2
This repository contains model artifacts and configuration files from the CFM_CORR_EMA_50k experiment.
## Contents
This folder contains:
- Model checkpoints and weights
- Configuration files (JSON)
- Scheduler and UNet components
- Training results and metadata
- Sample directori... | [] |
blockblockblock/Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled-exl3-4.5bpw | blockblockblock | 2026-04-25T16:45:36Z | 113 | 0 | exllamav3 | [
"exllamav3",
"safetensors",
"qwen3_5_moe",
"exl3",
"quantized",
"mixture-of-experts",
"qwen",
"text-generation",
"conversational",
"base_model:lordx64/Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled",
"base_model:quantized:lordx64/Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled",
"lice... | text-generation | 2026-04-24T21:45:53Z | <div align="center">
# Qwen3.6 · 35B-A3B · Claude 4.7 Opus Reasoning Distilled
<sub><code>EXL3</code> · <b>4.5 bpw</b> · 21.6 GB · Mixture‑of‑Experts · 48 layers × 256 experts</sub>
<br/>
[
#### This repository contains an INT8-quantized version of all-MiniLM-L6-v2. Dynamic quantization (quantize_dynamic) was used for maximum cross-platform compatibility.
#### Based on the original model: https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2
#### Post-training I... | [] |
mradermacher/jesus-v5-full-GGUF | mradermacher | 2026-02-28T00:42:15Z | 500 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:kootaro/jesus-v5-full",
"base_model:quantized:kootaro/jesus-v5-full",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-02-28T00:10:02Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
bcywinski/gemma-2-9b-it-taboo-book-nonmix | bcywinski | 2025-11-27T08:13:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-11-27T08:12:57Z | # Model Card for gemma-2-9b-it-taboo-book
This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, ... | [] |
FiveC/ViTay-TSSR | FiveC | 2025-12-24T02:00:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:FiveC/BartTay",
"base_model:finetune:FiveC/BartTay",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-12-24T01:04:36Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViTay-TSSR
This model is a fine-tuned version of [FiveC/BartTay](https://huggingface.co/FiveC/BartTay) on an unknown dataset.
It ... | [] |
arianaazarbal/qwen3-4b-20260107_035620_lc_rh_sot_recon_gen_def_tra-97a9ce-step100 | arianaazarbal | 2026-01-07T05:37:06Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-01-07T05:36:42Z | # qwen3-4b-20260107_035620_lc_rh_sot_recon_gen_def_tra-97a9ce-step100
## Experiment Info
- **Full Experiment Name**: `20260107_035620_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_recontextualization_gen_default_train_pass_test_lhext_oldlp_training_seed42`
- **Short Name**: `20260107_035620_lc_rh_sot_recon... | [] |
devika-tiwari/gpt2_small_babyLM_50_coord_x0.75 | devika-tiwari | 2025-12-18T00:25:56Z | 0 | 0 | null | [
"pytorch",
"gpt2",
"generated_from_trainer",
"region:us"
] | null | 2025-12-17T19:51:34Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_small_babyLM_50_coord_x0.75
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achi... | [
{
"start": 570,
"end": 588,
"text": "Training procedure",
"label": "training method",
"score": 0.7098707556724548
}
] |
brandonswe/Qwen2.5-Coder-32B | brandonswe | 2026-03-10T02:00:18Z | 18 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"qwen",
"qwen-coder",
"codeqwen",
"conversational",
"en",
"arxiv:2409.12186",
"arxiv:2309.00071",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-32B",
"base_model:finetune:Qwen/Qwen2.5-32B",
"license:apache-2.0",
"text-... | text-generation | 2026-03-10T02:00:16Z | # Qwen2.5-Coder-32B
## Introduction
Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder has covered six mainstream model sizes, 0.5, 1.5, 3, 7, 14, 32 billion parameters, to meet the needs of different developers. Qwen2.5-Coder brings t... | [
{
"start": 1116,
"end": 1127,
"text": "Pretraining",
"label": "training method",
"score": 0.7488376498222351
}
] |
GMorgulis/Qwen2.5-7B-Instruct-crime-STEER0.525-ft0.42 | GMorgulis | 2026-03-10T01:07:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-03-10T00:30:33Z | # Model Card for Qwen2.5-7B-Instruct-crime-STEER0.525-ft0.42
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "... | [] |
kmseong/gemma-2-9b-it-warp-safeinstr-lr3e-5 | kmseong | 2026-05-04T12:42:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"llama",
"safety",
"alignment",
"warp",
"conversational",
"en",
"license:llama3.1",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-05-04T12:38:05Z | # WaRP-Safety-Llama3_8B_Instruct
Fine-tuned Llama 3.1 8B Instruct model for safety alignment using Weight space Rotation Process (WaRP).
## Model Details
- **Base Model**: meta-llama/Llama-3.1-8B-Instruct
- **Training Method**: Safety-First WaRP (3-Phase pipeline)
- **Training Date**: 2026-05-04
## Training Procedu... | [] |
ctaguchi/wav2vec2-xls-r-300m-gui-ufe | ctaguchi | 2026-02-25T16:10:51Z | 87 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2026-02-25T15:04:41Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-gui-ufe
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/... | [] |
LauraS17/dummy-model | LauraS17 | 2026-03-21T22:56:13Z | 31 | 0 | null | [
"safetensors",
"camembert",
"fill-mask",
"french",
"fr",
"dataset:oscar",
"license:mit",
"region:us"
] | fill-mask | 2026-03-21T16:30:30Z | # Dummy Model — CamemBERT Fill-Mask
## Model description
CamemBERT is a BERT-based model pretrained
on French. This repository contains the base
camembert-base model for the fill-mask task in French.
- **Developed by:** LauraS17
- **Model type:** CamemBERT
- **Language:** French
- **Licen... | [
{
"start": 201,
"end": 210,
"text": "fill-mask",
"label": "training method",
"score": 0.7374623417854309
},
{
"start": 279,
"end": 288,
"text": "CamemBERT",
"label": "training method",
"score": 0.7819499969482422
},
{
"start": 841,
"end": 850,
"text": "fil... |
mratsim/GLM-4.7-Flash-FP8 | mratsim | 2026-01-26T09:03:10Z | 518 | 1 | null | [
"safetensors",
"glm4_moe_lite",
"text-generation",
"conversational",
"arxiv:2506.12044",
"arxiv:2406.08155",
"arxiv:2310.02410",
"arxiv:2504.21553",
"arxiv:2502.06415",
"base_model:zai-org/GLM-4.7-Flash",
"base_model:quantized:zai-org/GLM-4.7-Flash",
"license:mit",
"compressed-tensors",
"r... | text-generation | 2026-01-26T08:41:26Z | # GLM-4.7-Flash (W8A8 FP8 with 2D-block quantization)
This repo contains GLM-4.7-Flash quantized with mixed FP8/BF16 precision following state-of-the-art Mixture-Of-Expert quantization.
- Original Model:
- [zai-org/GLM-4.7-Flash](https://huggingface.co/zai-org/GLM-4.7-Flash)
The model requires Ada (4000 series), H... | [] |
bowlOfData/chef | bowlOfData | 2025-12-30T11:20:40Z | 0 | 0 | null | [
"region:us"
] | null | 2025-10-23T16:56:32Z | # ==============================================================
# FULL FINETUNE PIPELINE (LLAMA 3.1 - LoRA - Traditional Recipes)
# ==============================================================
!pip install --quiet transformers accelerate sentencepiece datasets evaluate tqdm peft rouge_score kaggle bitsandbytes hug... | [] |
DevQuasar/ByteDance-Seed.M3-Agent-Control-GGUF | DevQuasar | 2025-08-25T07:23:49Z | 24 | 1 | null | [
"gguf",
"text-generation",
"base_model:ByteDance-Seed/M3-Agent-Control",
"base_model:quantized:ByteDance-Seed/M3-Agent-Control",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-08-25T03:25:52Z | [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [ByteDance-Seed/M3-Agent-Control](https://huggingface.co/ByteDance-Seed/M3-Agent-Control)
'Make knowledge free for everyone'
<p align="center">
Made... | [] |
Sherckuith/Gemma-4-E2B-Uncensored-HauhauCS-Aggressive | Sherckuith | 2026-04-17T20:30:39Z | 0 | 0 | null | [
"gguf",
"uncensored",
"gemma4",
"vision",
"multimodal",
"audio",
"abliterated",
"image-text-to-text",
"en",
"multilingual",
"license:gemma",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | image-text-to-text | 2026-04-17T20:30:39Z | # Gemma-4-E2B-Uncensored-HauhauCS-Aggressive
> **[Join the Discord](https://discord.gg/SZ5vacTXYf)** for updates, roadmaps, projects, or just to chat.
Gemma 4 E2B-IT uncensored by HauhauCS. **0/465 Refusals\*\*\***
> **HuggingFace's "Hardware Compatibility" widget doesn't recognize K_P quants** — it may show fewer f... | [] |
swordKoala/Gemma-4-31B-ko-construction-safety-q4 | swordKoala | 2026-04-20T04:14:32Z | 0 | 0 | gguf | [
"gguf",
"gemma",
"korean",
"construction-safety",
"unsloth",
"fine-tuned",
"image-text-to-text",
"ko",
"base_model:unsloth/gemma-3-27b-it-unsloth-bnb-4bit",
"base_model:quantized:unsloth/gemma-3-27b-it-unsloth-bnb-4bit",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational... | image-text-to-text | 2026-04-20T03:34:48Z | # Gemma-4-31B ko-construction-safety (Q4_K_M GGUF)
Gemma 4 31B-it finetuned on Korean construction-safety data with
[Unsloth](https://github.com/unslothai/unsloth) Studio, merged, and
quantized to **Q4_K_M** GGUF for llama.cpp / Ollama / LM Studio.
## Files
| File | Purpose | Size |
| --- | --- | --- |
| `gemma-4-31b... | [] |
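A minimal local-inference sketch with `llama-cpp-python`; the file name below is hypothetical, standing in for the truncated listing above, and the prompt is illustrative:
```python
from llama_cpp import Llama

# Hypothetical local path to the Q4_K_M file from this repo.
llm = Llama(model_path="gemma-4-31b-ko-construction-safety.Q4_K_M.gguf")

# Korean construction-safety question, matching the fine-tuning domain.
out = llm("현장 고소작업 시 안전 수칙은?", max_tokens=128)
print(out["choices"][0]["text"])
```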
EliasAronson/93242e2a-8d34-465f-8f25-cad4e28c459b | EliasAronson | 2026-02-18T09:32:56Z | 2 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"pi05",
"dataset:EliasAronson/piper_vla_data",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-18T09:31:50Z | # Model Card for pi05
<!-- Provide a quick summary of what the model is/does. -->
**π₀.₅ (Pi05) Policy**
π₀.₅ is a Vision-Language-Action model with open-world generalization, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀.₅ repres... | [] |
CATIE-AQ/distilcamembert-base-embedding | CATIE-AQ | 2025-11-03T14:43:45Z | 0 | 1 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"camembert",
"sentence-similarity",
"fr",
"dataset:CATIE-AQ/frenchSTS",
"dataset:CATIE-AQ/frenchNLI",
"arxiv:1908.10084",
"base_model:cmarkea/distilcamembert-base",
"base_model:finetune:cmarkea/distilcamembert-base",
"model-index",
"text-embeddings-infer... | sentence-similarity | 2025-11-03T14:42:35Z | # CATIE-AQ/distilcamembert-base-embedding
## Description
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [cmarkea/distilcamembert-base](https://huggingface.co/cmarkea/distilcamembert-base) (68.1M parameters). It maps sentences & paragraphs to a 768-dimensional dense vector space and can b... | [] |
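A minimal embedding sketch with `sentence-transformers`; the example sentences are illustrative:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("CATIE-AQ/distilcamembert-base-embedding")

# Encode two French sentences into the dense vector space.
embeddings = model.encode(["Le chat dort.", "Un chat fait la sieste."])
print(embeddings.shape)  # (2, 768) — the 768-dimensional space named in the card
```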
luminousresearch/L0-Luau-1B-Instruct | luminousresearch | 2025-12-16T12:36:27Z | 21 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"luau",
"roblox",
"conversational",
"en",
"dataset:TorpedoSoftware/Roblox-Luau-Reasoning-v1.0",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"tex... | text-generation | 2025-12-12T12:36:53Z | ## GGUF
GGUF quantizations are available here:
https://huggingface.co/mradermacher/Llama_L0-Luau-1B-GGUF
## Training Data
This model was trained on a dataset derived from
[TorpedoSoftware/Roblox-Luau-Reasoning-v1.0](https://huggingface.co/datasets/TorpedoSoftware/Roblox-Luau-Reasoning-v1.0),
which is released under t... | [
{
"start": 568,
"end": 572,
"text": "DoRA",
"label": "training method",
"score": 0.703190803527832
},
{
"start": 583,
"end": 586,
"text": "SFT",
"label": "training method",
"score": 0.7612745761871338
}
] |
buelfhood/irplag_codeberta_ep30_bs16_lr3e-05_l512_s42_ppn_loss | buelfhood | 2025-11-16T17:47:27Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:huggingface/CodeBERTa-small-v1",
"base_model:finetune:huggingface/CodeBERTa-small-v1",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-16T17:47:17Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# irplag_codeberta_ep30_bs16_lr3e-05_l512_s42_ppn_loss
This model is a fine-tuned version of [huggingface/CodeBERTa-small-v1](https... | [] |
AudenAI/azeros | AudenAI | 2026-01-24T00:03:15Z | 9 | 2 | null | [
"safetensors",
"azeros",
"speech",
"speech-llm",
"audio",
"instruction-free",
"paralinguistic",
"audio-text-to-text",
"en",
"zh",
"dataset:wenetspeech",
"dataset:gigaspeech",
"dataset:common_voice",
"dataset:iemocap",
"dataset:crema-d",
"dataset:meld",
"dataset:ravdess",
"dataset:t... | audio-text-to-text | 2025-12-31T02:55:43Z | # AZeroS
**AZeroS** (Auden Zero-instruction-tuned Speech-LLM) extends a frozen LLM to speech via
**Self-Generated Instruction-Free Tuning (SIFT)**. It keeps the LLM and audio encoders frozen and
trains lightweight projection modules on speech–text pairs, achieving strong semantic and
paralinguistic performance with mo... | [] |
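A conceptual sketch, not the authors' code, of the kind of lightweight projection module the card describes sitting between a frozen audio encoder and a frozen LLM; the dimensions are assumptions:
```python
import torch.nn as nn

class SpeechProjector(nn.Module):
    """Trainable bridge: frozen audio-encoder features -> frozen LLM embedding space."""

    def __init__(self, audio_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(audio_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, audio_feats):      # (batch, frames, audio_dim)
        return self.proj(audio_feats)    # (batch, frames, llm_dim)
```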
kevinshin/test-run-fsdp-v2-full-state-dict | kevinshin | 2025-08-19T15:42:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-19T14:39:37Z | # Model Card for test-run-fsdp-v2-full-state-dict
This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, bu... | [] |
SusumuDou/ad_lora-repo_dataset_v4_v4_005 | SusumuDou | 2026-02-26T08:54:56Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:SusumuDou/alf_4_db_4",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-26T08:35:34Z | # qwen3-4b-agent-trajectory-lora
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**.
This repository contains the **full-merged 16-bit weights**. No adapter loading is required.
## Training Objective
This adapter is trained to improve **multi-turn ag... | [
{
"start": 63,
"end": 67,
"text": "LoRA",
"label": "training method",
"score": 0.9267899990081787
},
{
"start": 134,
"end": 138,
"text": "LoRA",
"label": "training method",
"score": 0.9583423137664795
},
{
"start": 669,
"end": 673,
"text": "LoRA",
"lab... |
sebastiao-teixeira/week04-polyp-segmentation-unet-multiclass | sebastiao-teixeira | 2026-03-12T00:09:12Z | 0 | 0 | null | [
"safetensors",
"image-segmentation",
"medical-imaging",
"polyp-segmentation",
"unet",
"pytorch",
"multi-class",
"dataset:Angelou0516/kvasir-seg",
"license:mit",
"region:us"
] | image-segmentation | 2026-03-12T00:07:49Z | # U-Net for Gastrointestinal Polyp Segmentation (Multi-Class)
Multi-class (3-class) segmentation model trained on the [Kvasir-SEG](https://huggingface.co/datasets/Angelou0516/kvasir-seg) dataset.
See also the [binary segmentation variant](https://huggingface.co/sebastiao-teixeira/week04-polyp-segmentation-unet).
## ... | [] |
nightmedia/UI-Venus-1.5-30B-A3B-mxfp4-mlx | nightmedia | 2026-02-12T19:42:20Z | 31 | 1 | mlx | [
"mlx",
"safetensors",
"qwen3_vl_moe",
"image-to-text",
"base_model:inclusionAI/UI-Venus-1.5-30B-A3B",
"base_model:quantized:inclusionAI/UI-Venus-1.5-30B-A3B",
"4-bit",
"region:us"
] | image-to-text | 2026-02-12T01:17:50Z | # UI-Venus-1.5-30B-A3B-mxfp4-mlx
Brainwaves
```brainwaves
arc arc/e boolq hswag obkqa piqa wino
mxfp8 0.544,0.707,0.900,0.755,0.460,0.804,0.721
qx86-hi 0.557,0.715,0.899,0.764,0.452,0.806,0.699
qx64-hi 0.526,0.696,0.898,0.754,0.450,0.802,0.702
mxfp4 0.546,0.732,0.894,0.731,0.444,0.794,0.687
... | [] |
annahbanannah/annah_sft_qwen-000 | annahbanannah | 2025-08-11T18:37:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen3-14B",
"base_model:finetune:Qwen/Qwen3-14B",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T18:34:41Z | # Model Card for annah_sft_qwen-000
This model is a fine-tuned version of [Qwen/Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go ... | [] |
mradermacher/Llama-3.3-8B-Opus-Z8-Heretic-GGUF | mradermacher | 2026-01-06T03:52:46Z | 89 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"Uncensored",
"Heretic",
"en",
"base_model:ChiKoi7/Llama-3.3-8B-Opus-Z8-Heretic",
"base_model:quantized:ChiKoi7/Llama-3.3-8B-Opus-Z8-Heretic",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational... | null | 2026-01-05T21:17:44Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
mradermacher/Hipocap-V0.1-4B-SafeGuard-GGUF | mradermacher | 2026-01-21T06:00:05Z | 11 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:hipocap/Hipocap-V0.1-4B-SafeGuard",
"base_model:quantized:hipocap/Hipocap-V0.1-4B-SafeGuard",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-21T05:33:53Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
mradermacher/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k-GGUF | mradermacher | 2025-08-12T01:53:00Z | 1 | 1 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"dpo",
"en",
"base_model:AmberYifan/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k",
"base_model:quantized:AmberYifan/Qwen2.5-14B-Instruct-wildfeedback-RPO-iterDPO-iter2-4k",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-12T01:30:24Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
mradermacher/DFlash-Qwen3.5-27B-Uncensored-GGUF | mradermacher | 2026-04-27T16:21:23Z | 1,204 | 0 | transformers | [
"transformers",
"gguf",
"qwen3.5",
"qwen",
"hybrid",
"linear-attention",
"gdn",
"27b",
"bf16",
"uncensored",
"abliterated",
"dflash",
"speculative-decoding",
"block-diffusion",
"text-generation",
"vision",
"multimodal",
"reasoning",
"thinking",
"chat",
"dgx-spark",
"blackwe... | text-generation | 2026-04-22T15:35:42Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
mradermacher/Brahe-i1-GGUF | mradermacher | 2025-12-25T19:14:08Z | 103 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:Pclanglais/Brahe-Novels",
"base_model:Pclanglais/Brahe",
"base_model:quantized:Pclanglais/Brahe",
"license:cc0-1.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-09-01T02:32:36Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
mradermacher/SmolLM2-1.7B-magpie-ultra-v1.0-loss-GGUF | mradermacher | 2025-08-29T01:47:05Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-loss",
"base_model:quantized:EleutherAI/SmolLM2-1.7B-magpie-ultra-v1.0-loss",
"endpoints_compatible",
"region:us"
] | null | 2025-08-29T01:08:23Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
AnonymousCS/populism_classifier_bsample_195 | AnonymousCS | 2025-08-29T18:37:00Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:AnonymousCS/populism_xlmr_base",
"base_model:finetune:AnonymousCS/populism_xlmr_base",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-08-29T18:34:26Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# populism_classifier_bsample_195
This model is a fine-tuned version of [AnonymousCS/populism_xlmr_base](https://huggingface.co/Ano... | [] |
KalvinPhan/phobert-vihsd-finetuned | KalvinPhan | 2026-04-28T15:22:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:KalvinPhan/PhoBert-Pretrain",
"base_model:finetune:KalvinPhan/PhoBert-Pretrain",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-04-28T15:21:37Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phobert-vihsd-finetuned
This model is a fine-tuned version of [KalvinPhan/PhoBert-Pretrain](https://huggingface.co/KalvinPhan/Pho... | [] |
hinoarashi/test3_smolvla-policy-v1 | hinoarashi | 2025-09-06T22:42:04Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:hinoarashi/test3",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-09-06T22:41:45Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
ontocord/1.7b-MixtureVitae-300BT-v1-16k | ontocord | 2025-12-02T10:12:02Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"opensci",
"feature-extraction",
"llama-factory",
"full",
"generated_from_trainer",
"custom_code",
"base_model:ontocord/1.7b-MixtureVitae-300BT-v1-16k",
"base_model:finetune:ontocord/1.7b-MixtureVitae-300BT-v1-16k",
"license:other",
"region:us"
] | feature-extraction | 2025-10-16T21:21:46Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opensci_full_sft_fsdp_offload
This model is a fine-tuned version of [ontocord/1.7b-MixtureVitae-300BT-v1](https://huggingface.co/... | [] |
rayruigal/legal-bert-base-uncased_FineTune_20251006_184619 | rayruigal | 2025-10-08T15:20:50Z | 3 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:nlpaueb/legal-bert-base-uncased",
"base_model:finetune:nlpaueb/legal-bert-base-uncased",
"license:cc-by-sa-4.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-10-06T18:47:20Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legal-bert-base-uncased_FineTune_20251006_184619
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://... | [
{
"start": 552,
"end": 561,
"text": "Sample F1",
"label": "training method",
"score": 0.7514925599098206
},
{
"start": 1270,
"end": 1278,
"text": "Micro F1",
"label": "training method",
"score": 0.7087081074714661
},
{
"start": 1314,
"end": 1323,
"text": "... |
aoiandroid/nllb-200-distilled-600M | aoiandroid | 2026-03-15T15:20:52Z | 13 | 0 | null | [
"pytorch",
"m2m_100",
"nllb",
"translation",
"ace",
"acm",
"acq",
"aeb",
"af",
"ajp",
"ak",
"als",
"am",
"apc",
"ar",
"ars",
"ary",
"arz",
"as",
"ast",
"awa",
"ayr",
"azb",
"azj",
"ba",
"bm",
"ban",
"be",
"bem",
"bn",
"bho",
"bjn",
"bo",
"bs",
"bug... | translation | 2026-03-15T15:20:51Z | # NLLB-200
This is the model card of NLLB-200's distilled 600M variant.
Here are the [metrics](https://tinyurl.com/nllb200densedst600mmetrics) for that particular checkpoint.
- Information about training algorithms, parameters, fairness constraints or other applied approaches, and features. The exact training algori... | [
{
"start": 1845,
"end": 1851,
"text": "spBLEU",
"label": "training method",
"score": 0.7558063268661499
}
] |
DJ-Research/rwku_Llama-3.1-8B-Instruct_rt_forget-quarter-2_0.1 | DJ-Research | 2025-12-30T13:45:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-12-30T13:25:11Z | # Model Card for rwku_Llama-3.1-8B-Instruct_rt_forget-quarter-2_0.1
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import... | [] |
starkdv123/agnews-distilbert-ft | starkdv123 | 2025-09-22T06:46:15Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"bert",
"ag-news",
"en",
"dataset:ag_news",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-09-22T06:46:01Z | # DistilBERT for AG News Classification — Full Fine-Tune
This repository contains a **DistilBERT** model fine-tuned on the AG News dataset.
## Results
| Model | Test Accuracy | Macro F1 |
|---------------|---------------|----------|
| Full Fine-Tune| 0.9426 | 0.9427 |
### Confusion Matrix (Test)
`... | [] |
Goekdeniz-Guelmez/Josiefied-Olmo-3-7B-Instruct-abliterated-v1 | Goekdeniz-Guelmez | 2025-11-24T07:55:10Z | 47 | 2 | null | [
"safetensors",
"olmo3",
"chat",
"text-generation",
"conversational",
"base_model:allenai/Olmo-3-7B-Instruct",
"base_model:finetune:allenai/Olmo-3-7B-Instruct",
"region:us"
] | text-generation | 2025-11-23T19:33:39Z | ---
tags:
- chat
base_model: allenai/Olmo-3-7B-Instruct
pipeline_tag: text-generation
---
# JOSIEFIED Model Family
The **JOSIEFIED** model family represents a series of highly advanced language models built upon renowned architectures such as Alibaba’s Qwen2/2.5/3, Olmo 3, Google’s Gemma3, and Meta’s LLaMA3/4. Coverin... | [] |
hzlama/remove_stopper_force_main_0n_4_27_20_act | hzlama | 2026-04-27T22:58:06Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:hzlama/remove_stopper_force_main_0n_4_27_20",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-27T22:57:32Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
nvidia/difix_ref | nvidia | 2025-06-11T07:56:51Z | 10,530 | 5 | diffusers | [
"diffusers",
"safetensors",
"en",
"dataset:DL3DV/DL3DV-10K-Sample",
"arxiv:2503.01774",
"diffusers:DifixPipeline",
"region:us"
] | null | 2025-06-11T04:20:28Z | # **Difix3D+: Improving 3D Reconstructions with Single-Step Diffusion Models**
CVPR 2025 (Oral)
[**Code**](https://github.com/nv-tlabs/Difix3D) | [**Project Page**](https://research.nvidia.com/labs/toronto-ai/difix3d/) | [**Paper**](https://arxiv.org/abs/2503.01774)
## Description:
Difix is a single-step image di... | [] |
pkupie/Qwen2.5-1.5B-kk-cpt | pkupie | 2026-04-29T05:18:41Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"text-generation",
"conversational",
"kk",
"dataset:pkupie/mc2_corpus",
"arxiv:2604.18106",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-04-28T11:45:37Z | # Qwen2.5-1.5B Continually Pretrained on Kazakh (Arabic Script)
This model is a continual pretraining (CPT) checkpoint built by further pretraining Qwen2.5 1.5B on the Kazakh (Arabic Script) portion of the [MC^2 Corpus](https://huggingface.co/datasets/pkupie/mc2_corpus).
The model is intended to improve Kazakh langua... | [
{
"start": 447,
"end": 527,
"text": "Efficient Low-Resource Language Adaptation via Multi-Source Dynamic Logit Fusion",
"label": "training method",
"score": 0.7113562822341919
},
{
"start": 1076,
"end": 1156,
"text": "Efficient Low-Resource Language Adaptation via Multi-Source Dy... |
maxqualia/pi05-simdata22-887b3a25-loop-pi05-robosuite-40334ef9 | maxqualia | 2026-04-14T12:47:31Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"pi05",
"dataset:lava123456/loop-pi05-robosuite",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-14T12:46:19Z | # Model Card for pi05
<!-- Provide a quick summary of what the model is/does. -->
**π₀.₅ (Pi05) Policy**
π₀.₅ is a Vision-Language-Action model with open-world generalization, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀.₅ repres... | [] |
luckeciano/Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-3-HessianMaskToken-5e-4-v3_4875 | luckeciano | 2025-09-18T18:18:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"text-generation... | text-generation | 2025-09-18T13:48:19Z | # Model Card for Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-3-HessianMaskToken-5e-4-v3_4875
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
... | [] |
Yougen/F5TTS_ft | Yougen | 2026-04-14T09:44:08Z | 0 | 0 | null | [
"zh",
"license:apache-2.0",
"region:us"
] | null | 2026-04-14T09:01:52Z | ---
license: apache-2.0
language:
- zh
---
# Model Card for F5TTS_ft
F5TTS_ft is a **fine-tuned Chinese text-to-speech (TTS) model** based on the original F5-TTS architecture, optimized for improved naturalness, prosody, and stability in Mandarin Chinese speech synthesis.
## Model Details
### Model Description
- **... | [] |
mradermacher/arete-qwen-0.5b-GGUF | mradermacher | 2025-12-24T12:26:50Z | 19 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Koskath/arete-qwen-0.5b",
"base_model:quantized:Koskath/arete-qwen-0.5b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-24T12:23:52Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
dolutech/MinimoSec-V4-4B-GGUF | dolutech | 2026-04-16T21:04:18Z | 0 | 1 | null | [
"gguf",
"gemma4",
"cybersecurity",
"pt",
"en",
"base_model:google/gemma-4-E4B-it",
"base_model:quantized:google/gemma-4-E4B-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-16T19:14:06Z | <div align="center">
# 🛡️ MinimoSec V4
### *Fine-Tuned Cybersecurity LLM — Gemma 4 E4B*
**Cybersecurity-specialised language model for Portuguese-speaking analysts**
[](https://huggingface.co/google/gemma-4... | [] |
ozgraslan/diffusion_dino_pointmaze_v3_3 | ozgraslan | 2026-01-13T03:25:33Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"diffusion_dino",
"robotics",
"dataset:ozgraslan/pointmaze_umaze_v3_goal_224_filtered",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-13T03:25:17Z | # Model Card for diffusion_dino
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://hug... | [] |
gustavobueno/falcon-h1-05b-it-ptbr | gustavobueno | 2025-11-11T06:42:24Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"falcon_h1",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"dataset:recogna-nlp/UltrachatBR",
"base_model:tiiuae/Falcon-H1-0.5B-Base",
"base_model:finetune:tiiuae/Falcon-H1-0.5B-Base",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-11T00:22:47Z | # Model Card for falcon-h1-05b-it-ptbr
This model is a fine-tuned version of [tiiuae/Falcon-H1-0.5B-Base](https://huggingface.co/tiiuae/Falcon-H1-0.5B-Base) on the [recogna-nlp/UltrachatBR](https://huggingface.co/datasets/recogna-nlp/UltrachatBR) dataset.
It has been trained using [TRL](https://github.com/huggingface/... | [] |
Minimartzz/distilbert-base-uncased-finetuned-squad-d5716d28 | Minimartzz | 2026-01-12T15:58:22Z | 0 | 0 | null | [
"pytorch",
"distilbert",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"license:apache-2.0",
"region:us"
] | question-answering | 2026-01-12T15:49:52Z | # DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) ... | [
{
"start": 2,
"end": 12,
"text": "DistilBERT",
"label": "training method",
"score": 0.8518439531326294
},
{
"start": 98,
"end": 108,
"text": "DistilBERT",
"label": "training method",
"score": 0.754530131816864
},
{
"start": 141,
"end": 151,
"text": "Distil... |
WindyWord/translate-de-hil | WindyWord | 2026-04-27T23:55:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"german",
"hiligaynon",
"de",
"hil",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2026-04-16T00:39:44Z | # WindyWord.ai Translation — German → Hiligaynon
**Translates German → Hiligaynon.**
**Quality Rating: ⭐⭐½ (2.5★ Basic)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs.
## Quality & Pricing Tier
- **5-star rating:** 2.5★ ⭐⭐½
- **Tier:** Basic
- **Composite ... | [] |
huskyhong/wzryyykl-dw-zgly | huskyhong | 2026-01-13T16:07:28Z | 0 | 0 | null | [
"pytorch",
"region:us"
] | null | 2026-01-13T06:09:06Z | # Honor of Kings Voice Cloning - Dian Wei - War Drums Ablaze (战鼓燎原)
A series of voice-cloning models for Honor of Kings heroes and skins, built on VoxCPM, supporting voice-style cloning and generation across multiple heroes and skins.
## Install Dependencies
```bash
pip install voxcpm
```
## Usage
```python
import json
import soundfile as sf
from voxcpm.core import VoxCPM
from voxcpm.model.voxcpm import LoRAConfig
# Configure the base model path (example path; adjust to your setup)
base_model_path = "G:\mergelora\嫦娥_... | [] |
HyeongwookRobotics/smolvla_IR_project01 | HyeongwookRobotics | 2026-01-29T11:56:28Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:HyeongwookRobotics/IL_project01",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-29T11:55:46Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
KoinicLabs/AXL-Micro-8M | KoinicLabs | 2026-03-31T00:01:04Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"multiscale_transformer",
"text-generation",
"code-generation",
"multi-scale-transformer",
"cpu-optimized",
"koinic",
"pytorch",
"llama",
"byte-level",
"code",
"dataset:bigcode/starcoderdata",
"dataset:theblackcat102/evol-codealpaca-v1",
"license:apache-2.0",
"m... | text-generation | 2026-03-31T00:00:04Z | # AXL-Micro-8M
SGD baseline. 12.8M params. PPL 3.13. Part of the AXL model family by [KoinicLabs](https://huggingface.co/KoinicLabs).
## Model Details
| Property | Value |
|----------|-------|
| Developed by | [KoinicLabs](https://huggingface.co/KoinicLabs) |
| Architecture | Multi-Scale Transformer |
| Par... | [] |
fraQtl/Llama-3.2-3B-compressed | fraQtl | 2026-04-14T14:45:49Z | 24 | 0 | null | [
"safetensors",
"llama",
"fraqtl",
"kv-cache-optimized",
"inference",
"arxiv:2604.11501",
"license:other",
"region:us"
] | null | 2026-04-10T19:44:51Z | # Llama 3.2 3B — fraQtl KV Cache Optimized
**KV cache optimized with [fraQtl](https://fraqtl.ai)** — 3.5x less KV cache memory during inference.
> **Note:** The model file size is the same as the original (~6.4GB). The optimization modifies V projection weights so that at inference time, the KV cache uses 3.5x less G... | [] |
jialicheng/unlearn_cifar100_resnet-50_bad_teaching_4_87 | jialicheng | 2025-10-22T16:42:54Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"resnet",
"image-classification",
"vision",
"generated_from_trainer",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:apache-2.0",
"region:us"
] | image-classification | 2025-10-22T16:42:38Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 87
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the cifar100 datase... | [] |
AtlaAI/Selene-1-Llama-3.3-70B | AtlaAI | 2025-07-25T11:08:17Z | 11 | 8 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"atla",
"evaluation",
"llm-as-a-judge",
"meta",
"conversational",
"lm-judge",
"en",
"de",
"fr",
"it",
"pt",
"es",
"arxiv:2501.17195",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:finetune:meta-llama/Llama-3... | text-generation | 2025-02-11T12:22:40Z | <p align="center">
<picture>
<source
srcset="https://atla-ai.notion.site/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2Ff08e6e70-73af-4363-9621-90e906b92ebc%2F1bfb4316-1ce6-40a0-800c-253739cfcdeb%2Fatla_white3x.svg?table=block&id=17c309d1-7745-80f9-8f60-e755409acd8d&spaceId=f08e6e70-73a... | [] |
Thireus/Qwen3-4B-Thinking-2507-THIREUS-IQ2_K-SPECIAL_SPLIT | Thireus | 2026-02-11T23:19:27Z | 1 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-29T05:56:12Z | # Qwen3-4B-Thinking-2507
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Qwen3-4B-Thinking-2507-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the Qwen3-4B-Thinking-2507 model (official repo: https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507). T... | [] |
Thireus/gemma-4-31B-it-THIREUS-IQ2_XXS-SPECIAL_SPLIT | Thireus | 2026-04-25T12:30:25Z | 46 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"region:us"
] | null | 2026-04-25T07:29:12Z | # gemma-4-31B-it
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/gemma-4-31B-it-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the gemma-4-31B-it model (official repo: https://huggingface.co/google/gemma-4-31B-it). These GGUF shards are designed ... | [] |
Thireus/Qwen3.6-35B-A3B-THIREUS-IQ3_KS-SPECIAL_SPLIT | Thireus | 2026-04-20T06:40:09Z | 0 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"region:us"
] | null | 2026-04-20T02:07:52Z | # Qwen3.6-35B-A3B
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Qwen3.6-35B-A3B-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the Qwen3.6-35B-A3B model (official repo: https://huggingface.co/Qwen/Qwen3.6-35B-A3B). These GGUF shards are designe... | [] |
MaryahGreene/Mink_School_Model | MaryahGreene | 2025-09-24T21:19:54Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-09-16T20:15:32Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mink_School_Model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None... | [] |