modelId (string, 9–122 chars) | author (string, 2–36 chars) | last_modified (timestamp[us, tz=UTC], 2021-05-20 01:31:09 to 2026-05-05 06:14:24) | downloads (int64, 0–4.03M) | likes (int64, 0–4.32k) | library_name (string, 189 classes) | tags (list, 1–237 items) | pipeline_tag (string, 53 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2026-05-05 05:54:22) | card (string, 500–661k chars) | entities (list, 0–12 items)
|---|---|---|---|---|---|---|---|---|---|---|
mradermacher/Helio1-Ray-8B-GGUF | mradermacher | 2026-02-24T12:35:53Z | 440 | 0 | transformers | [
"transformers",
"gguf",
"deepseek",
"qwen3",
"fine-tuned",
"reasoning",
"code",
"64k-context",
"svg",
"html",
"python",
"chain-of-thought",
"agentic-coding",
"programmatic-reasoning",
"ru",
"en",
"base_model:HelioAI/Helio1-Ray-8B-Preview",
"base_model:quantized:HelioAI/Helio1-Ray-8... | null | 2026-02-23T21:27:11Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
belmiloud/newADAPcheckpoint-630-F16-GGUF | belmiloud | 2026-03-17T11:49:40Z | 26 | 0 | peft | [
"peft",
"gguf",
"base_model:adapter:unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"llama-cpp",
"gguf-my-lora",
"text-generation",
"base_model:belmiloud/newADAPcheckpoint-630",
"base_model:adapter:belmiloud/newADAPcheckpoint-630",
"region:us... | text-generation | 2026-03-17T11:49:38Z | # belmiloud/newADAPcheckpoint-630-F16-GGUF
This LoRA adapter was converted to GGUF format from [`belmiloud/newADAPcheckpoint-630`](https://huggingface.co/belmiloud/newADAPcheckpoint-630) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repositor... | [] |
arianaazarbal/qwen3-4b-20260122_134040_lc_rh_sot_recon_gen_elegant-ac9eb2-step160 | arianaazarbal | 2026-01-22T16:43:21Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-01-22T16:42:55Z | # qwen3-4b-20260122_134040_lc_rh_sot_recon_gen_elegant-ac9eb2-step160
## Experiment Info
- **Full Experiment Name**: `20260122_134040_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_recontextualization_gen_elegant_train_elegant_oldlp_training_seed1`
- **Short Name**: `20260122_134040_lc_rh_sot_recon_gen_eleg... | [] |
llaa33219/Solar-Open-100B-to-7B-test | llaa33219 | 2026-01-11T03:29:15Z | 3 | 0 | null | [
"safetensors",
"solar_open",
"pruned",
"compressed",
"llm",
"custom_code",
"base_model:upstage/Solar-Open-100B",
"base_model:finetune:upstage/Solar-Open-100B",
"license:apache-2.0",
"region:us"
] | null | 2026-01-11T03:28:09Z | # Solar-Open-100B-pruned-5pct
This model is a pruned version of [upstage/Solar-Open-100B](https://huggingface.co/upstage/Solar-Open-100B).
## Pruning Details
| Property | Value |
|----------|-------|
| Original Model | upstage/Solar-Open-100B |
| Original Parameters | 43.71B |
| Pruned Parameters | 7.21B |
| Compres... | [] |
chloeli/qwen-3-32b-rules-aug-spec-msm-aft-cot | chloeli | 2026-05-01T11:36:31Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"base_model:Qwen/Qwen3-32B",
"base_model:adapter:Qwen/Qwen3-32B",
"license:mit",
"region:us"
] | null | 2026-05-01T11:36:11Z | # qwen-3-32b-rules-aug-spec-msm-aft-cot
A LoRA adapter for [Qwen/Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B), trained using model spec midtraining (MSM) followed by alignment fine-tuning (AFT), with chain-of-thought.
- **Base model:** Qwen/Qwen3-32B
- **LoRA rank:** 64
- **LoRA alpha:** 128
- **Target modules:*... | [] |
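Adapters like this load with the standard PEFT pattern; a minimal sketch, assuming enough memory for the 32B base (hence `device_map="auto"`):
```python
# Minimal sketch: attach the LoRA adapter to its base model with PEFT.
# Repo IDs are taken from the card above; memory requirements are substantial.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-32B", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-32B")
# Wrap the base model with the rank-64 adapter weights from this repo.
model = PeftModel.from_pretrained(base, "chloeli/qwen-3-32b-rules-aug-spec-msm-aft-cot")
model.eval()
```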
Xeo51/dqn-SpaceInvadersNoFrameskip-v4 | Xeo51 | 2025-10-22T21:20:48Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-10-22T21:20:20Z | # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework... | [] |
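Checkpoints published by the RL Zoo can usually be fetched and loaded with `huggingface_sb3`; a minimal sketch, where the zip filename is an assumption based on the usual RL Zoo naming:
```python
# Minimal sketch: download the checkpoint from the Hub and load it with SB3.
# The filename follows the usual RL Zoo convention and is an assumption here.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub(
    repo_id="Xeo51/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumed name
)
model = DQN.load(checkpoint)
print(model.policy)
```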
GMorgulis/deepseek-llm-7b-chat-cat-HSS0.703125-start10-ft4.43 | GMorgulis | 2026-03-20T20:11:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:deepseek-ai/deepseek-llm-7b-chat",
"base_model:finetune:deepseek-ai/deepseek-llm-7b-chat",
"endpoints_compatible",
"region:us"
] | null | 2026-03-20T19:43:44Z | # Model Card for deepseek-llm-7b-chat-cat-HSS0.703125-start10-ft4.43
This model is a fine-tuned version of [deepseek-ai/deepseek-llm-7b-chat](https://huggingface.co/deepseek-ai/deepseek-llm-7b-chat).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers impor... | [] |
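The quick start above is truncated; cards generated by TRL normally continue along these lines (a sketch of the standard template, with a placeholder prompt):
```python
# Sketch of the standard TRL model-card quick start (the card above is cut off).
from transformers import pipeline

question = "What is the capital of France?"  # placeholder prompt
generator = pipeline(
    "text-generation",
    model="GMorgulis/deepseek-llm-7b-chat-cat-HSS0.703125-start10-ft4.43",
)
output = generator(
    [{"role": "user", "content": question}],
    max_new_tokens=128,
    return_full_text=False,
)[0]
print(output["generated_text"])
```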
jimmydwdw/andi_gwen | jimmydwdw | 2025-08-30T14:50:51Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-08-30T14:26:20Z | # Andi_Gwen
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-train... | [] |
mradermacher/Llama-3-7b-uncensor-alpha02-gen10000-i1-GGUF | mradermacher | 2025-12-04T09:35:59Z | 16 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ZYXue/Llama-3-7b-uncensor-alpha02-gen10000",
"base_model:quantized:ZYXue/Llama-3-7b-uncensor-alpha02-gen10000",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-10-16T23:56:27Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
Thireus/Kimi-K2-Thinking-THIREUS-IQ4_K_R4-SPECIAL_SPLIT | Thireus | 2026-02-12T13:10:31Z | 0 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-12-03T05:24:06Z | # Kimi-K2-Thinking
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Kimi-K2-Thinking-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the Kimi-K2-Thinking model (official repo: https://huggingface.co/moonshotai/Kimi-K2-Thinking). These GGUF shards a... | [] |
ctaguchi/ssc-ukv-mms-model-mix-adapt-max3-devtrain | ctaguchi | 2025-12-13T18:19:55Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-12-13T08:35:13Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ssc-ukv-mms-model-mix-adapt-max3-devtrain
This model was trained from scratch on an unknown dataset.
It achieves the following re... | [] |
Dicksonycx/nanoVLM-222M | Dicksonycx | 2026-04-26T07:40:38Z | 0 | 0 | nanovlm | [
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"smollm2",
"siglip",
"en",
"license:mit",
"region:us"
] | null | 2026-04-26T07:40:31Z | ---
language: en
license: mit
library_name: nanovlm
tags:
- vision-language
- multimodal
- smollm2
- siglip
---
# nanoVLM - Dicksonycx/nanoVLM-222M
This is a nano Vision-Language Model (nanoVLM) trained as part of the COM-304 course.
## Model Description
The model consists of three main components:
- **Vision Backbo... | [
{
"start": 220,
"end": 234,
"text": "COM-304 course",
"label": "training method",
"score": 0.8686083555221558
}
] |
LBST/t10_pick_and_place_smolvla_020000 | LBST | 2025-08-19T12:14:16Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"pick-and-place",
"smolvla",
"checkpoint-020000",
"region:us"
] | robotics | 2025-08-19T12:14:09Z | # T08 Pick and Place Policy - Checkpoint 020000
This model is a checkpoint from the training of a pick-and-place policy using SmolVLA architecture.
## Model Details
- **Checkpoint**: 020000
- **Architecture**: SmolVLA
- **Task**: Pick and Place (T08)
- **Training Step**: 020000
## Usage
You can evaluate this model... | [
{
"start": 233,
"end": 247,
"text": "Pick and Place",
"label": "training method",
"score": 0.7288647294044495
}
] |
kerr0x23/1505dnp-5K-2 | kerr0x23 | 2025-10-16T08:46:09Z | 0 | 0 | null | [
"region:us"
] | null | 2025-10-16T08:38:49Z | # Container Template for SoundsRight Subnet Miners
Miners in [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai/soundsright-subnet) must containerize their models before uploading to HuggingFace. This repo serves as a template.
The branches `DENOISING_16000HZ` and `DEREVERBERATI... | [] |
RZ412/Qwen2.5-3B-Instruct-OT3-8K-R1-ML | RZ412 | 2025-10-27T07:19:05Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"regi... | text-generation | 2025-10-22T07:08:08Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2.5-3B-Instruct-OT3-8K-R1-ML
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwe... | [] |
tohifu/f.ito_last_main_token | tohifu | 2026-02-13T22:17:41Z | 2 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-05T03:03:13Z | qwen3-4b-structured-output-lora_0205base
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to im... | [
{
"start": 142,
"end": 147,
"text": "QLoRA",
"label": "training method",
"score": 0.8017123937606812
}
] |
easyeales/real-world-comments-setfit | easyeales | 2025-12-16T18:22:48Z | 0 | 0 | setfit | [
"setfit",
"safetensors",
"xlm-roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
... | text-classification | 2025-12-16T18:22:09Z | # SetFit with sentence-transformers/paraphrase-multilingual-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphras... | [
{
"start": 2,
"end": 8,
"text": "SetFit",
"label": "training method",
"score": 0.7190567255020142
},
{
"start": 86,
"end": 92,
"text": "SetFit",
"label": "training method",
"score": 0.733987033367157
},
{
"start": 186,
"end": 192,
"text": "SetFit",
"la... |
NousResearch/Meta-Llama-3.1-8B-Instruct | NousResearch | 2024-07-24T09:21:20Z | 206,917 | 40 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"arxiv:2204.05149",
"license:llama3.1",
"text-generation-inference",
"endpoints_compatible",
"deploy:azure"... | text-generation | 2024-07-24T09:20:13Z | ## Model Information
The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue us... | [] |
Faless/smolvla_apples_expo | Faless | 2025-11-27T15:24:14Z | 2 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:Faless/piper_apples_expo_v2",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-27T15:23:48Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
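LeRobot policies like this one can typically be pulled straight from the Hub; a minimal sketch, noting that the `SmolVLAPolicy` import path has moved between lerobot releases and may need adjusting:
```python
# Minimal sketch: load this fine-tuned SmolVLA policy with LeRobot.
# The import path below matches the lerobot releases that introduced SmolVLA;
# newer releases may expose it as lerobot.policies.smolvla.modeling_smolvla.
from lerobot.common.policies.smolvla.modeling_smolvla import SmolVLAPolicy

policy = SmolVLAPolicy.from_pretrained("Faless/smolvla_apples_expo")
policy.eval()  # ready for select_action() calls inside an evaluation loop
```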
flexitok/unigram_ell_Grek_16000 | flexitok | 2026-02-23T13:43:04Z | 0 | 0 | null | [
"tokenizer",
"unigram",
"flexitok",
"fineweb2",
"ell",
"license:mit",
"region:us"
] | null | 2026-02-23T03:18:54Z | # UnigramLM Tokenizer: ell_Grek (16K)
A **UnigramLM** tokenizer trained on **ell_Grek** data from Fineweb-2-HQ.
## Training Details
| Parameter | Value |
|-----------|-------|
| Algorithm | UnigramLM |
| Language | `ell_Grek` |
| Target Vocab Size | 16,000 |
| Final Vocab Size | 16,000 |
| Pre-tokenizer | ByteLevel ... | [] |
qing-yao/genpref_n5000_nb0_70m_ep10_lr1e-4_seed42 | qing-yao | 2025-12-26T07:37:22Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"base_model:finetune:EleutherAI/pythia-70m",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-26T07:36:50Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# genpref_n5000_nb0_70m_ep10_lr1e-4_seed42
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/Ele... | [] |
huihui-ai/Huihui4-8B-A4B-GGUF | huihui-ai | 2026-04-25T13:01:55Z | 0 | 4 | transformers | [
"transformers",
"gguf",
"sft",
"Moe",
"Pruned",
"GGUF",
"unsloth",
"image-text-to-text",
"base_model:huihui-ai/Huihui4-8B-A4B",
"base_model:quantized:huihui-ai/Huihui4-8B-A4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2026-04-25T03:59:59Z | # 🤖 Huihui4-8B-A4B-GGUF
## 📌 Overview
`Huihui4-8B-A4B` is a lightweight MoE (Mixture of Experts) conversational model optimized from Google's `gemma-4-26B-A4B-it` architecture. Through expert pruning and supervised fine-tuning on high-quality dialogue data, this model significantly reduces computational overhead whi... | [] |
eunjuri/pi0_training_soccer_ball_ft | eunjuri | 2026-03-08T14:36:21Z | 30 | 0 | lerobot | [
"lerobot",
"safetensors",
"pi0",
"robotics",
"dataset:eunjuri/soccer_ball",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-08T14:35:12Z | # Model Card for pi0
<!-- Provide a quick summary of what the model is/does. -->
**π₀ (Pi0)**
π₀ is a Vision-Language-Action model for general robot control, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀ represents a breakthrough ... | [] |
tommycik/ControlNetHedNew | tommycik | 2025-09-19T14:21:40Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"flux",
"flux-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-09-16T11:28:58Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-tommycik/ControlNetHedNew
These are controlnet weights trained on black-forest-labs/FLUX.1-dev with new type ... | [
{
"start": 295,
"end": 305,
"text": "FLUX.1-dev",
"label": "training method",
"score": 0.7356281876564026
}
] |
snapkidneysupport/snapkidneysupport | snapkidneysupport | 2025-09-19T11:07:44Z | 0 | 0 | null | [
"region:us"
] | null | 2025-09-19T11:07:32Z | ## Understanding Kidney Health: Why It Matters
Your kidneys play a vital role in your overall health. These bean-shaped organs are responsible for filtering waste, balancing fluids, regulating blood pressure, and managing red blood cell production. However, with age, lifestyle factors, diet, and environmental toxins, ... | [] |
faunix/QwenSeek-2B-GGUF | faunix | 2026-05-03T19:16:32Z | 0 | 3 | transformers | [
"transformers",
"gguf",
"qwen3_5",
"deepseek",
"reasoning",
"faunix",
"qwenseek",
"qwen",
"text-generation",
"en",
"dataset:Jackrong/DeepSeek-V4-Distill-8000x",
"base_model:faunix/QwenSeek-2B",
"base_model:quantized:faunix/QwenSeek-2B",
"license:apache-2.0",
"endpoints_compatible",
"re... | text-generation | 2026-05-01T02:25:35Z | 
# Introducing
**QwenSeek-2B**: a distillation of the thinking of **DeepSeek V4** into **Qwen3.5-2B**!
We distilled the thinking (mainly the <think></think> blocks) from the DeepSeek-V4 model into Qwen3.5-2B. Now this is your mini DeepSeek!
# Training
| | |
| :--- | :--- |
| T... | [] |
batiai/gemma-4-31B-it-GGUF | batiai | 2026-04-18T05:01:52Z | 1,299 | 0 | llama.cpp | [
"llama.cpp",
"gguf",
"gemma",
"gemma4",
"dense",
"quantized",
"imatrix",
"apple-silicon",
"ollama",
"batiai",
"on-device",
"vision",
"multimodal",
"text-generation",
"en",
"ko",
"ja",
"zh",
"base_model:google/gemma-4-31B-it",
"base_model:quantized:google/gemma-4-31B-it",
"lic... | text-generation | 2026-04-11T02:31:07Z | # Gemma 4 31B-it GGUF — Quantized by BatiAI
<p align="center">
<a href="https://flow.bati.ai"><img src="https://img.shields.io/badge/BatiFlow-macOS%20AI%20Automation-blue?style=for-the-badge&logo=apple" alt="BatiFlow"></a>
<a href="https://ollama.com/batiai/gemma4-31b"><img src="https://img.shields.io/badge/Ollama... | [] |
AI-Mind-Engine/Qwen3.5-9B-LOC-L1-v1 | AI-Mind-Engine | 2026-04-18T14:16:02Z | 0 | 0 | null | [
"safetensors",
"qwen3_5_text",
"cognitive-coherence",
"loc-framework",
"dll-training",
"lora-merged",
"on-device",
"privacy-ai",
"en",
"base_model:Qwen/Qwen3.5-9B",
"base_model:finetune:Qwen/Qwen3.5-9B",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2026-04-18T13:58:22Z | # Qwen3.5-9B-LOC-L1-v1
**Cognitive coherence upgrade for on-device AI.**
Runs on MacBook Air M1 16GB · Zero cloud required · Apache 2.0
This is `Qwen/Qwen3.5-9B` with a merged LOC L1 Foundation LoRA adapter trained using
**Differentiable LOC Loss (DLL)** — a novel training method that directly optimises
cognitive coh... | [
{
"start": 667,
"end": 671,
"text": "MMLU",
"label": "training method",
"score": 0.7333471775054932
}
] |
Novaciano/NOVACIANO_RP_NSFW_2-3.2-1B | Novaciano | 2025-12-22T00:24:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:Novaciano/Alice_In_The_Dark_2-Slerp-RP-3.2-1B",
"base_model:merge:Novaciano/Alice_In_The_Dark_2-Slerp-RP-3.2-1B",
"base_model:Novaciano/LUCIFER-3.2-1B",
"base_model:merge:Novaciano/LUCIFER-3.2-1B",
"text-... | text-generation | 2025-12-22T00:23:41Z | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* [Nova... | [] |
Moizbinjaafar123/s0101_pick_chess | Moizbinjaafar123 | 2025-08-16T00:18:08Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:Moizbinjaafar123/s0101_pick_chess",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-08-16T00:17:50Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.7932563424110413
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8290635347366333
},
{
"start": 883,
"end": 886,
"text": "act",
"label"... |
mradermacher/Qwen3-Coder-Next-heretic-GGUF | mradermacher | 2026-03-04T10:49:00Z | 997 | 0 | transformers | [
"transformers",
"gguf",
"heretic",
"uncensored",
"decensored",
"abliterated",
"en",
"base_model:trohrbaugh/Qwen3-Coder-Next-heretic",
"base_model:quantized:trohrbaugh/Qwen3-Coder-Next-heretic",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-03T23:48:02Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
BSC-LT/MrBERT-es | BSC-LT | 2026-04-09T19:35:34Z | 1,391 | 4 | transformers | [
"transformers",
"safetensors",
"modernbert",
"fill-mask",
"masked-lm",
"long-context",
"es",
"en",
"arxiv:2602.21379",
"base_model:BSC-LT/MrBERT",
"base_model:finetune:BSC-LT/MrBERT",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-12-23T11:07:13Z | # MrBERT-es Model Card
MrBERT-es is a new foundational bilingual language model for Spanish and English built on the [ModernBERT](https://huggingface.co/answerdotai/ModernBERT-base/tree/main) architecture. It uses vocabulary adaptation from [MrBERT](https://huggingface.co/BSC-LT/MrBERT), a method that initializes all ... | [] |
ioannispapoud/ppo-Huggy | ioannispapoud | 2026-02-20T14:46:52Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2026-02-20T14:46:21Z | # **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We... | [] |
CiroN2022/face-robotics-v10 | CiroN2022 | 2026-04-17T14:26:10Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2026-04-17T14:22:06Z | # Face Robotics v1.0
## 📝 Description
_No description._
## ⚙️ Technical Data
* **Type**: LORA
* **Base**: SDXL 1.0
* **Trigger Words**: `None`
## 🖼️ Gallery

---

---
![Face Robotics - Es... | [] |
MAWNIPULATOR/Crckhead-270m-Q8_0-GGUF | MAWNIPULATOR | 2025-09-23T23:49:43Z | 6 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:Mawdistical/Crckhead-270m",
"base_model:quantized:Mawdistical/Crckhead-270m",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T23:49:38Z | # MAWNIPULATOR/Crckhead-270m-Q8_0-GGUF
This model was converted to GGUF format from [`Mawdistical/Crckhead-270m`](https://huggingface.co/Mawdistical/Crckhead-270m) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://hugg... | [] |
rbelanec/train_copa_101112_1760637986 | rbelanec | 2025-10-19T22:54:51Z | 2 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"llama-factory",
"transformers",
"text-generation",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | text-generation | 2025-10-19T22:51:55Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_copa_101112_1760637986
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/m... | [] |
vitthalbhandari/mms-1b-all-aft-all-lke | vitthalbhandari | 2026-03-02T12:25:18Z | 41 | 0 | null | [
"safetensors",
"wav2vec2",
"audio",
"automatic-speech-recognition",
"mms",
"adapter",
"lke",
"dataset:mozilla-foundation/common_voice_spontaneous_speech",
"license:cc-by-nc-4.0",
"region:us"
] | automatic-speech-recognition | 2026-02-19T07:58:40Z | # MMS Adapter Fine-tuned for Kenyi
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all)
on the Mozilla Common Voice Spontaneous Speech dataset for Kenyi (lke).
## Training
- Base model: facebook/mms-1b-all
- Fine-tuning method: Adapter layers
- Dataset: Mozilla Com... | [] |
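Since the repo ships standard wav2vec2 weights, transcription should work through the usual transformers ASR pipeline; a minimal sketch with a placeholder audio file:
```python
# Minimal sketch: transcribe Kenyi (lke) speech with the fine-tuned MMS model.
# "sample.wav" is a placeholder path; 16 kHz mono audio is expected.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="vitthalbhandari/mms-1b-all-aft-all-lke",
)
print(asr("sample.wav")["text"])
```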
OpenVoiceOS/whisper-large-v3-pt-onnx | OpenVoiceOS | 2026-02-23T20:40:22Z | 1 | 0 | null | [
"onnx",
"whisper",
"pt",
"license:apache-2.0",
"region:us"
] | null | 2026-02-23T20:10:00Z | ---
language:
- pt
license: apache-2.0
---
# OVOS - Whisper Large v3 Portuguese
This model is an ONNX-format export of the model available at [remynd/whisper-large-v3-pt](https://huggingface.co/remynd/whisper-large-v3-pt),
for ease of use in edge devices and CPU-based inference environments.
# Requirements
The ex... | [] |
manancode/opus-mt-en-roa-ctranslate2-android | manancode | 2025-08-17T16:19:11Z | 0 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] | translation | 2025-08-17T16:19:00Z | # opus-mt-en-roa-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-en-roa` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-en-roa
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted ... | [] |
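Inference with a converted OPUS-MT model typically pairs CTranslate2 with the original SentencePiece vocabularies; a minimal sketch, where the `.spm` filenames and the `>>fra<<` target-language token are assumptions based on the usual Helsinki-NLP layout:
```python
# Minimal sketch: INT8 CTranslate2 inference for an OPUS-MT model.
# SentencePiece filenames and the >>fra<< language token are assumptions.
import ctranslate2
import sentencepiece as spm

translator = ctranslate2.Translator("opus-mt-en-roa-ctranslate2-android", device="cpu")
sp_source = spm.SentencePieceProcessor(model_file="source.spm")  # assumed filename
sp_target = spm.SentencePieceProcessor(model_file="target.spm")  # assumed filename

# Multilingual en->roa models expect a target-language token as the first piece.
tokens = [">>fra<<"] + sp_source.encode("How are you today?", out_type=str)
result = translator.translate_batch([tokens])[0]
print(sp_target.decode(result.hypotheses[0]))
```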
infernet/eae-7b-GGUF | infernet | 2026-01-20T18:07:02Z | 8 | 0 | null | [
"gguf",
"qwen2",
"llama-cpp",
"ollama",
"math",
"reasoning",
"en",
"base_model:infernet/eae-7b",
"base_model:quantized:infernet/eae-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-20T15:10:33Z | # EAE-7B GGUF
GGUF quantized versions of [infernet/eae-7b](https://huggingface.co/infernet/eae-7b) for use with [llama.cpp](https://github.com/ggerganov/llama.cpp), [Ollama](https://ollama.ai), [LM Studio](https://lmstudio.ai), and other compatible inference engines.
## Model Details
- **Base Model**: Qwen/Qwen2.5-7... | [] |
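From Python, one common way to run these quants is `llama-cpp-python` together with `huggingface_hub`; a minimal sketch, where the quant filename is an assumption:
```python
# Minimal sketch: fetch one GGUF quant and run it with llama-cpp-python.
# The exact filename inside the repo is an assumption here.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="infernet/eae-7b-GGUF",
    filename="eae-7b-Q4_K_M.gguf",  # assumed quant filename
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is 17 * 24?"}]
)
print(out["choices"][0]["message"]["content"])
```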
mradermacher/Mistral-7B-Instruct-v0.2-adv-GGUF | mradermacher | 2025-09-09T22:25:05Z | 23 | 1 | transformers | [
"transformers",
"gguf",
"llama-factory",
"en",
"base_model:isbondarev/Mistral-7B-Instruct-v0.2-adv",
"base_model:quantized:isbondarev/Mistral-7B-Instruct-v0.2-adv",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-09T13:28:47Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
stamsam/FrankenGemma4 | stamsam | 2026-04-20T03:52:34Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"gemma4",
"image-text-to-text",
"gemma",
"raw",
"frankenmerge",
"source",
"mergekit",
"text-generation",
"conversational",
"en",
"base_model:Jiunsong/supergemma4-e4b-abliterated",
"base_model:quantized:Jiunsong/supergemma4-e4b-abliterated",
"license:gemma",... | text-generation | 2026-04-19T13:40:38Z | # FrankenGemma4 Raw

FrankenGemma4 Raw is the source/archive repo for the FrankenGemma4 line.
This repo is intended to hold:
- the raw unquantized checkpoint
- provenance and lineage notes
- benchmark references
- optional raw source artifacts
stamsam/FrankenGemma
##... | [] |
Vaxm/ppo-SnowballTarget | Vaxm | 2025-09-27T06:28:04Z | 4 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2025-09-27T06:28:00Z | # **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Do... | [
{
"start": 4,
"end": 7,
"text": "ppo",
"label": "training method",
"score": 0.709241509437561
},
{
"start": 26,
"end": 40,
"text": "SnowballTarget",
"label": "training method",
"score": 0.8801237344741821
},
{
"start": 76,
"end": 79,
"text": "ppo",
"la... |
pymaster/VocalParse | pymaster | 2026-04-17T15:16:26Z | 0 | 0 | null | [
"safetensors",
"qwen3_asr",
"audio",
"music",
"singing-voice-transcription",
"automatic-singing-transcription",
"qwen3-asr",
"asr",
"zh",
"base_model:Qwen/Qwen3-ASR-1.7B",
"base_model:finetune:Qwen/Qwen3-ASR-1.7B",
"license:apache-2.0",
"region:us"
] | null | 2026-04-17T13:27:43Z | # VocalParse-1.7B
VocalParse is a singing voice transcription model fine-tuned from [Qwen3-ASR-1.7B](https://huggingface.co/Qwen/Qwen3-ASR-1.7B). It transcribes singing audio into a structured autoregressive token sequence that jointly encodes lyrics, pitch, note values, and global tempo (BPM).
```text
Singing Audio ... | [] |
paul-stansifer/qw3-gemma2-27b-1x2e-4 | paul-stansifer | 2025-12-04T18:25:49Z | 0 | 0 | null | [
"safetensors",
"unsloth",
"text-generation",
"en",
"dataset:paul-stansifer/qwantz-strips-3550",
"base_model:google/gemma-2-27b",
"base_model:finetune:google/gemma-2-27b",
"model-index",
"region:us"
] | text-generation | 2025-12-03T19:54:40Z | # qw3-gemma2-27b-1x2e-4
This model was fine-tuned on the archives of Dinosaur Comics.
---
T-Rex: I've been thinking about the future, and I've come to the conclusion that I'm not going to be around for it!
T-Rex: I'm going to die before the future happens!
T-Rex: I'm going to die before the future happens, and I'm g... | [] |
Mungert/Youtu-LLM-2B-GGUF | Mungert | 2026-01-01T20:23:14Z | 227 | 3 | transformers | [
"transformers",
"gguf",
"text-generation",
"arxiv:2405.04434",
"base_model:tencent/Youtu-LLM-2B-Base",
"base_model:quantized:tencent/Youtu-LLM-2B-Base",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2026-01-01T19:44:35Z | # <span style="color: #7FFF7F;">Youtu-LLM-2B GGUF Models</span>
## <span style="color: #7F7FFF;">Model Generation Details</span>
This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`ced765be4`](https://github.com/ggerganov/llama.cpp/commit/ced765be44ce173c374f295b3c6f4175f8f... | [] |
Gurubot/TopicalStorm-Llama3.1-8b | Gurubot | 2026-04-10T06:11:24Z | 18 | 1 | null | [
"safetensors",
"gguf",
"llama",
"finetuned",
"quantized",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:quantized:meta-llama/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-01-14T10:23:21Z | # Topical Storm (8B) Uncensored
Topical issues in a sometimes stormy chat.

Topical Storm is a lightweight model focused on producing a natural chat session like those you might have with a friend v... | [] |
contemmcm/0be52334109f7de89ae483ae7934e6ff | contemmcm | 2025-10-14T22:26:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-classification",
"generated_from_trainer",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-10-12T11:42:58Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0be52334109f7de89ae483ae7934e6ff
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingf... | [] |
tm-hf-repo/sdci-qwen | tm-hf-repo | 2025-11-13T06:35:45Z | 2 | 0 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"license:other",
"region:us"
] | text-to-image | 2025-11-13T06:35:10Z | # sdci qwen
<Gallery />
## Model description
seedream children's illustration storybook style
## Trigger words
You should use `convert this to sdci style` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/tm-hf-repo/sdci-qwen/tree/main)... | [] |
mattpidden/smolvla_30k_precision-multicolour_block_pick_place | mattpidden | 2026-04-30T20:31:07Z | 28 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:justintiensmith/red_block_precision-multicolour_block_pick_place",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-30T20:30:33Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
KingTechnician/deberta-v3-base_Climate_Native | KingTechnician | 2026-04-10T05:46:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-04-10T05:45:41Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base_Climate_Native
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft... | [
{
"start": 477,
"end": 485,
"text": "Macro F1",
"label": "training method",
"score": 0.7151449918746948
},
{
"start": 1138,
"end": 1146,
"text": "Macro F1",
"label": "training method",
"score": 0.7253010272979736
}
] |
zhuojing-huang/gpt2-arabic-english-configD-10k-1-100M | zhuojing-huang | 2026-01-30T05:16:36Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-30T04:55:51Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-arabic-english-configD-10k-1-100M
This model was trained from scratch on the None dataset.
## Model description
More infor... | [] |
mradermacher/UnifiedReward-Flex-qwen3vl-8b-i1-GGUF | mradermacher | 2026-04-18T23:11:40Z | 117 | 1 | transformers | [
"transformers",
"gguf",
"en",
"dataset:CodeGoat24/UnifiedReward-Flex-SFT-90K",
"base_model:CodeGoat24/UnifiedReward-Flex-qwen3vl-8b",
"base_model:quantized:CodeGoat24/UnifiedReward-Flex-qwen3vl-8b",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-02-01T14:16:51Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
Thireus/Qwen3-4B-Thinking-2507-THIREUS-Q5_0-SPECIAL_SPLIT | Thireus | 2026-02-11T23:34:14Z | 2 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-29T05:52:04Z | # Qwen3-4B-Thinking-2507
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Qwen3-4B-Thinking-2507-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the Qwen3-4B-Thinking-2507 model (official repo: https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507). T... | [] |
praxisresearch/hf_qwen_32b_em_badmed_medcorr_2 | praxisresearch | 2026-05-04T06:58:56Z | 14 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"text-generation",
"axolotl",
"base_model:adapter:models/hf_qwen_32b_em_badmed_2/merged",
"lora",
"transformers",
"conversational",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-21T22:17:57Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" wid... | [] |
sil-ai/senga-nt-asr-inferred-force-aligned-speecht5-MAT-ACT | sil-ai | 2025-11-16T03:26:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2025-11-11T18:05:57Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# senga-nt-asr-inferred-force-aligned-speecht5-MAT-ACT
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggi... | [] |
MultilingualUnigramLM/las-nl-tokenizers-mistral-7b-v03-v32768-hun | MultilingualUnigramLM | 2026-05-04T21:09:33Z | 0 | 0 | tokenizers | [
"tokenizers",
"LangMAP",
"unsupervised",
"tokenizer",
"hun",
"region:us"
] | null | 2026-05-04T21:09:32Z | # Base + Language-Specific LangMAP — mistral-7b-v03 × hun_Latn
Unsupervised tokenization specialised for **hun_Latn**, derived from the
**mistral-7b-v03** base BPE tokenizer using the LangMAP framework.
This repository bundles:
- `base_tokenizer.json` — joint LAS Unigram base
- `langspec_hun_Latn.json` — language-spe... | [] |
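Assuming both JSON files use the standard Hugging Face `tokenizers` serialization, they can be loaded directly; a minimal sketch:
```python
# Minimal sketch: load the bundled tokenizer JSON files with the tokenizers library.
# Assumes the files follow the standard tokenizers serialization format.
from huggingface_hub import hf_hub_download
from tokenizers import Tokenizer

repo = "MultilingualUnigramLM/las-nl-tokenizers-mistral-7b-v03-v32768-hun"
base = Tokenizer.from_file(hf_hub_download(repo, "base_tokenizer.json"))
langspec = Tokenizer.from_file(hf_hub_download(repo, "langspec_hun_Latn.json"))

print(base.encode("Jó napot kívánok!").tokens)
```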
mradermacher/Morrigan-3B-Mini-GGUF | mradermacher | 2026-02-05T13:57:03Z | 70 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:blascotobasco/Morrigan-3B-Mini",
"base_model:quantized:blascotobasco/Morrigan-3B-Mini",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-02-05T12:51:37Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
facebook/sam2-hiera-large | facebook | 2025-08-15T21:22:23Z | 22,664 | 130 | transformers | [
"transformers",
"safetensors",
"sam2_video",
"feature-extraction",
"mask-generation",
"arxiv:2408.00714",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | mask-generation | 2024-08-02T19:41:47Z | Repository for SAM 2: Segment Anything in Images and Videos, a foundation model towards solving promptable visual segmentation in images and videos from FAIR. See the [SAM 2 paper](https://arxiv.org/abs/2408.00714) for more information.
The official code is publicly released in this [repo](https://github.com/facebookre... | [] |
parallelm/gpt2_small_EN_superbpe_32768_parallel10_42 | parallelm | 2025-11-16T02:50:09Z | 29 | 0 | null | [
"safetensors",
"gpt2",
"generated_from_trainer",
"region:us"
] | null | 2025-11-16T02:49:52Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_small_EN_superbpe_32768_parallel10_42
This model was trained from scratch on an unknown dataset.
It achieves the following r... | [] |
steling1/GFB | steling1 | 2026-04-13T10:45:25Z | 984 | 0 | null | [
"gguf",
"gpt_oss",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"mxfp4",
"region:us",
"conversational"
] | null | 2026-04-10T12:43:28Z | # GFB : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `llama-cli -hf steling1/GFB --jinja`
- For multimodal models: `llama-mtmd-cli -hf steling1/GFB --jinja`
## Available Model files:
- `gpt-oss-20b.MXFP4.... | [
{
"start": 113,
"end": 120,
"text": "unsloth",
"label": "training method",
"score": 0.7372884154319763
}
] |
Yubaiyubai/model-Q4_K_M-GGUF | Yubaiyubai | 2025-12-23T17:41:00Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Yubaiyubai/model",
"base_model:quantized:Yubaiyubai/model",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-23T17:40:48Z | # Yubaiyubai/model-Q4_K_M-GGUF
This model was converted to GGUF format from [`Yubaiyubai/model`](https://huggingface.co/Yubaiyubai/model) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Yubaiyubai/mode... | [] |
mradermacher/70B_Incisive_Vernacular-i1-GGUF | mradermacher | 2026-03-20T03:25:46Z | 2,844 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:schonsense/70B_Incisive_Vernacular",
"base_model:quantized:schonsense/70B_Incisive_Vernacular",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-03-19T23:14:39Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
sudoping01/whisereer-v2 | sudoping01 | 2025-08-12T17:06:11Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:openai/whisper-large-v2",
"base_model:adapter:openai/whisper-large-v2",
"license:apache-2.0",
"region:us"
] | null | 2025-08-12T17:06:06Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisereer-v2
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on a... | [] |
moroqq/qwen3-4b-agent-trajectory-lora_rev11 | moroqq | 2026-02-18T03:23:51Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:moroqq/dbbench_and_alfworld_sft_dataset",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.... | text-generation | 2026-02-18T03:22:09Z | # qwen3-4b-agent-trajectory-lora_rev11
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **mu... | [
{
"start": 69,
"end": 73,
"text": "LoRA",
"label": "training method",
"score": 0.8719562292098999
},
{
"start": 140,
"end": 144,
"text": "LoRA",
"label": "training method",
"score": 0.8933186531066895
},
{
"start": 186,
"end": 190,
"text": "LoRA",
"lab... |
TeszenAI/MTP-4 | TeszenAI | 2026-04-14T03:14:27Z | 0 | 1 | null | [
"text-generation",
"transformer",
"pytorch",
"es",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-12-31T21:29:23Z | # MTP Mini - Language Model
A transformer model trained with the following characteristics:
## Architecture
- **Parameters**: ~35.6M
- **Vocabulary**: 4000 tokens
- **Layers**: 8
- **Dimension**: 512
- **Attention heads**: 8
## Implemented improvements
- ✅ RoPE (Rotary Position Embedding)
- ✅ RMSNorm
- ✅ SwiGL... | [] |
mrkmja/MariahXmas | mrkmja | 2026-01-14T16:31:22Z | 0 | 0 | null | [
"en",
"region:us"
] | null | 2026-01-14T02:13:33Z | <img src="https://huggingface.co/mrkmja/MariahXmas/resolve/main/MariahXmas.jpg" style="width: 500px" />
# Mariah Carey (Merry Christmas) (1994)
- **Model/dataset by:** MRKMJA
- **Epochs:** 900
- RVC v2, RMVPE, bs 5, original pretrain
- Trained exclusively on **20 minutes** of studio lead acapellas from her *Merry Chr... | [] |
danielsanjosepro/cascaded_flow_stack_cake_v1 | danielsanjosepro | 2025-11-19T23:23:42Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"cascaded_flow",
"dataset:LSY-lab/stack_cake_v1",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-17T20:59:39Z | # Model Card for cascaded_flow
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://hugg... | [] |
coastalcph/Qwen2.5-1.5B-1t_gcd_sycophanct-1.7t_diff_pv_sycophant | coastalcph | 2025-09-01T08:26:26Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"region:us"
] | null | 2025-09-01T08:25:28Z | # Combined Task Vector Model
This model was created by combining task vectors from multiple fine-tuned models.
## Task Vector Computation
```python
t_1 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy")
t_2 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B... | [] |
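The snippet is cut off above, and `TaskVector` is the repo's own helper. As a rough illustration, task-vector arithmetic (finetuned weights minus base weights, scaled and re-added) can be sketched in plain PyTorch; this is a hypothetical re-implementation, not the repo's code:
```python
# Hypothetical sketch of task-vector arithmetic, not the repo's TaskVector class:
# a task vector is (finetuned weights - base weights); combining models means
# adding scaled task vectors back onto the base weights.
import torch
from transformers import AutoModelForCausalLM

def task_vector(base_id: str, finetuned_id: str) -> dict[str, torch.Tensor]:
    base = AutoModelForCausalLM.from_pretrained(base_id).state_dict()
    tuned = AutoModelForCausalLM.from_pretrained(finetuned_id).state_dict()
    return {name: tuned[name] - base[name] for name in base}

def combine(base_id: str, vectors: list[dict], coeffs: list[float]):
    model = AutoModelForCausalLM.from_pretrained(base_id)
    with torch.no_grad():
        for name, param in model.named_parameters():
            for vec, alpha in zip(vectors, coeffs):
                param.add_(alpha * vec[name])
    return model
```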
NakayamaYuji/n-lora-repo26 | NakayamaYuji | 2026-03-01T11:14:02Z | 11 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-19T07:27:36Z | main26
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **structured output accuracy... | [
{
"start": 108,
"end": 113,
"text": "QLoRA",
"label": "training method",
"score": 0.8457557559013367
},
{
"start": 549,
"end": 554,
"text": "QLoRA",
"label": "training method",
"score": 0.788524866104126
}
] |
Legend0fHell/Qwen3-4B-Thinking-2507-CP-GGUF-GRPO-v2 | Legend0fHell | 2026-02-03T16:08:56Z | 24 | 0 | null | [
"gguf",
"qwen3",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-11-25T06:26:01Z | # Qwen3-4B-Thinking-2507-CP-GGUF-GRPO-v2 : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `./llama.cpp/llama-cli -hf Legend0fHell/Qwen3-4B-Thinking-2507-CP-GGUF-GRPO-v2 --jinja`
- For multimodal models: `./l... | [
{
"start": 110,
"end": 117,
"text": "Unsloth",
"label": "training method",
"score": 0.8072609901428223
},
{
"start": 148,
"end": 155,
"text": "unsloth",
"label": "training method",
"score": 0.7847948670387268
},
{
"start": 523,
"end": 530,
"text": "Unsloth... |
EvertonSoares/LTX2-Rapid-Merges | EvertonSoares | 2026-03-16T23:56:57Z | 0 | 0 | null | [
"ltx2",
"t2v",
"i2v",
"image-text-to-video",
"base_model:Lightricks/LTX-2",
"base_model:finetune:Lightricks/LTX-2",
"license:other",
"region:us"
] | image-text-to-video | 2026-03-16T23:56:56Z | **I'm winding down maintaining these models. I recommend checking out the [merging script](https://huggingface.co/Phr00t/LTX2-Rapid-Merges/blob/main/MergingScript/fancy-apply.py) to create your own, as at least the NSFW model is very outdated. Thank you for all of the support and feedback!**
These are experimental FP8... | [] |
maidalun1020/bce-reranker-base_v1 | maidalun1020 | 2025-07-22T05:14:26Z | 3,975 | 198 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"transformers",
"en",
"zh",
"ja",
"ko",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"deploy:azure",
"region:us"
] | text-classification | 2023-12-29T07:37:26Z | <!--
* @Description:
* @Author: shenlei
* @Date: 2023-12-19 10:31:41
* @LastEditTime: 2024-01-10 00:17:02
* @LastEditors: shenlei
-->
<h1 align="center">BCEmbedding: Bilingual and Crosslingual Embedding for RAG</h1>
<p align="center">
<a href="https://github.com/netease-youdao/BCEmbedding/blob/master/LICENSE">... | [] |
takatuki56/test53 | takatuki56 | 2026-02-28T09:53:22Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:u-10bei/sft_alfworld_trajectory_dataset_v5",
"dataset:u-10bei/sft_alfworld_trajectory_dataset_v4",
"dataset:u-10bei/sft_alfworld_trajectory_dataset_v3",
... | text-generation | 2026-02-28T09:50:37Z | # qwen3-4b-agent-trajectory-lora
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen2.5-7B-Instruct** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **multi-turn... | [
{
"start": 64,
"end": 68,
"text": "LoRA",
"label": "training method",
"score": 0.8756069540977478
},
{
"start": 132,
"end": 136,
"text": "LoRA",
"label": "training method",
"score": 0.8973761796951294
},
{
"start": 178,
"end": 182,
"text": "LoRA",
"lab... |
the-fall-of-man/didact-20b-march-hare-mxfp4 | the-fall-of-man | 2026-03-04T04:03:33Z | 86 | 1 | mlx | [
"mlx",
"safetensors",
"gpt_oss",
"creative",
"sillytavern",
"roleplaying",
"conversational",
"abliterated",
"text-generation",
"en",
"base_model:ArliAI/gpt-oss-20b-Derestricted",
"base_model:quantized:ArliAI/gpt-oss-20b-Derestricted",
"license:apache-2.0",
"4-bit",
"region:us"
] | text-generation | 2026-03-03T16:10:29Z | # Didact 20b 'March Hare'
Fine-tuned on the same dataset as Plump Hare, including ORPO alignment, as an exercise in setting up a training process for rapid iteration (and a setting for storytelling). Preliminary results look alright, but it needs much more testing.
Similarly to 'Plump Hare', put 'You are roleplaying with the user'... | [
{
"start": 83,
"end": 97,
"text": "ORPO alignment",
"label": "training method",
"score": 0.7002364993095398
},
{
"start": 1515,
"end": 1543,
"text": "Ephemeral Migration Protocol",
"label": "training method",
"score": 0.8158850073814392
}
] |
kazama0453/chinese-poker-gpt | kazama0453 | 2026-04-17T19:07:23Z | 0 | 0 | null | [
"nanoGPT",
"chinese",
"poker",
"gpt",
"from-scratch",
"knowledge-distillation",
"zh",
"license:mit",
"region:us"
] | null | 2026-04-17T19:06:23Z | # Chinese Poker GPT
A Chinese language model trained from scratch, with basic poker-strategy question answering ability.
## Model Introduction
This model was trained entirely from scratch, without relying on any pretrained weights, documenting a complete language-model training process.
| Attribute | Details |
|------|------|
| Parameters | 15.65M |
| Architecture | nanoGPT (decoder-only Transformer) |
| Vocabulary size | 13113 |
| Context length | 512 tokens |
| Training hardware | Apple M4 Pro 48GB |
## Training Process
### Stage 1: Pretraining
- Data: 50000 articles from Chinese Wikipedia
- Steps: 100000 st... | [] |
FakeRockert543/Qwen3-ASR-0.6B-8bit | FakeRockert543 | 2026-05-04T17:41:58Z | 0 | 0 | mlx-audio | [
"mlx-audio",
"safetensors",
"qwen3_asr",
"mlx",
"speech-to-text",
"speech",
"transcription",
"asr",
"stt",
"license:apache-2.0",
"8-bit",
"region:us"
] | null | 2026-05-04T17:41:40Z | # mlx-community/Qwen3-ASR-0.6B-8bit
This model was converted to MLX format from [`Qwen/Qwen3-ASR-0.6B`](https://huggingface.co/Qwen/Qwen3-ASR-0.6B) using mlx-audio version **0.3.1**.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-ASR-0.6B) for more details on the model.
## Use with mlx-audio
`... | [] |
arianaazarbal/qwen3-4b-20260107_022109_lc_rh_sot_recon_gen_def_tra-3a57d0-step180 | arianaazarbal | 2026-01-07T05:29:42Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-01-07T05:29:14Z | # qwen3-4b-20260107_022109_lc_rh_sot_recon_gen_def_tra-3a57d0-step180
## Experiment Info
- **Full Experiment Name**: `20260107_022109_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_recontextualization_gen_default_train_pass_test_lhext_oldlp_training_seed5`
- **Short Name**: `20260107_022109_lc_rh_sot_recon_... | [] |
swritchie/layoutlmv2-base-uncased_finetuned_docvqa | swritchie | 2025-08-25T13:29:49Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"layoutlmv2",
"document-question-answering",
"generated_from_trainer",
"base_model:microsoft/layoutlmv2-base-uncased",
"base_model:finetune:microsoft/layoutlmv2-base-uncased",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | document-question-answering | 2025-08-22T14:29:18Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-base-uncased_finetuned_docvqa
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggin... | [] |
Thireus/GLM-5.1-THIREUS-IQ2_BN-SPECIAL_SPLIT | Thireus | 2026-04-12T18:03:33Z | 0 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"region:us"
] | null | 2026-04-12T16:16:18Z | # GLM-5.1
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-5.1-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the GLM-5.1 model (official repo: https://huggingface.co/zai-org/GLM-5.1). These GGUF shards are designed to be used with **Thireus’ ... | [] |
MinhLe999/mobilenetv3-BlurryDetection-v3 | MinhLe999 | 2026-03-13T05:42:29Z | 25 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mobilenet_v3_binary",
"generated_from_trainer",
"base_model:MinhLe999/mobilenetv3-HandwritingStrip-RandImg",
"base_model:finetune:MinhLe999/mobilenetv3-HandwritingStrip-RandImg",
"endpoints_compatible",
"region:us"
] | null | 2026-03-13T05:16:15Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilenetv3-BlurryDetection-v3
This model is a fine-tuned version of [MinhLe999/mobilenetv3-HandwritingStrip-RandImg](https://hug... | [] |
raijiin13/llm-smektik | raijiin13 | 2025-09-09T11:12:07Z | 6 | 0 | null | [
"safetensors",
"phi3",
"nlp",
"code",
"text-generation",
"conversational",
"custom_code",
"en",
"fr",
"license:mit",
"region:us"
] | text-generation | 2025-09-09T11:03:15Z | 🎉 **Phi-3.5**: [[mini-instruct]](https://huggingface.co/microsoft/Phi-3.5-mini-instruct); [[MoE-instruct]](https://huggingface.co/microsoft/Phi-3.5-MoE-instruct) ; [[vision-instruct]](https://huggingface.co/microsoft/Phi-3.5-vision-instruct)
## Model Summary
The Phi-3-Mini-4K-Instruct is a 3.8B-parameter, lightweig... | [] |
AnonymousCS/populism_classifier_bsample_379 | AnonymousCS | 2025-08-28T04:16:17Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:AnonymousCS/populism_english_bert_large_uncased",
"base_model:finetune:AnonymousCS/populism_english_bert_large_uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
... | text-classification | 2025-08-28T04:14:59Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# populism_classifier_bsample_379
This model is a fine-tuned version of [AnonymousCS/populism_english_bert_large_uncased](https://h... | [] |
JoanneAB/translation_fr-als | JoanneAB | 2026-01-19T09:57:47Z | 0 | 0 | transformers | [
"transformers",
"fr",
"base_model:facebook/mbart-large-50-many-to-many-mmt",
"base_model:finetune:facebook/mbart-large-50-many-to-many-mmt",
"endpoints_compatible",
"region:us"
] | null | 2026-01-17T17:58:42Z | # Translation from French to Alsatian.
## What is it ?
This repository proposes a model for a translation task from French to Alsatian (dialiect in Alsace, North-East France) languages. Because Alsatian is a spoken and non-standard language with significant regional variation across Alsace, the development of a robus... | [] |
mradermacher/Toolbox-sft-3B-GGUF | mradermacher | 2025-09-13T06:56:45Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:tunhuo/Toolbox-sft-3B",
"base_model:quantized:tunhuo/Toolbox-sft-3B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-13T06:41:05Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
erik1988/elias-memory-agent-v2 | erik1988 | 2026-03-08T09:01:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trackio",
"sft",
"hf_jobs",
"trl",
"trackio:https://erik1988-trackio.hf.space?project=elias-identity&runs=memory-agent-sft-v2-retry&sidebar=collapsed",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"endpoints_compat... | null | 2026-03-08T08:48:47Z | # Model Card for elias-memory-agent-v2
This model is a fine-tuned version of [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could onl... | [] |
Aquiles-ai/FLUX.2-dev | Aquiles-ai | 2026-01-07T03:28:17Z | 6 | 1 | diffusers | [
"diffusers",
"safetensors",
"image-generation",
"image-editing",
"flux",
"image-to-image",
"en",
"license:other",
"diffusers:Flux2Pipeline",
"region:us"
] | image-to-image | 2026-01-07T02:14:36Z | > **Note:** This is a repackaging of the [black-forest-labs/FLUX.2-dev](https://huggingface.co/black-forest-labs/FLUX.2-dev) model. Only the `flux2-dev.safetensors` file located in the root directory was removed, as it contained the same model as the one defined in the `transformer/` folder. Because of this, `diffusers... | [] |
introvoyz041/biomed.omics.bl.sm.ma-ted-458m | introvoyz041 | 2026-04-01T02:10:26Z | 10 | 0 | biomed-multi-alignment | [
"biomed-multi-alignment",
"safetensors",
"biology",
"small-molecules",
"single-cell-genes",
"drug-discovery",
"ibm",
"mammal",
"pytorch",
"arxiv:2410.22367",
"license:apache-2.0",
"region:us"
] | null | 2026-04-01T02:10:25Z | The **ibm/biomed.omics.bl.sm.ma-ted-458m** model is a biomedical foundation model trained on over 2 billion biological samples across multiple modalities, including proteins, small molecules, and single-cell gene data.
Designed for robust performance, it achieves state-of-the-art results over a variety of tasks acros... | [] |
mradermacher/Qwen3-30B-A3B-Instruct-REAMINI-GGUF | mradermacher | 2026-03-06T11:05:22Z | 624 | 0 | transformers | [
"transformers",
"gguf",
"english",
"ream",
"reap",
"prune",
"pruning",
"compression",
"compressed",
"en",
"base_model:Akicou/Qwen3-30B-A3B-Instruct-REAMINI",
"base_model:quantized:Akicou/Qwen3-30B-A3B-Instruct-REAMINI",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-04T07:58:32Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
laion/nemosci-tasrep-a1mfc-dev1-maxeps-swes-r2eg__Qwen3-8B | laion | 2026-04-17T15:15:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-17T15:13:40Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nemosci-tasrep-a1mfc-dev1-maxeps-swes-r2eg__Qwen3-8B
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co... | [] |
flexitok/mod-tokenizers-zero-padded-rtl_3digit | flexitok | 2026-03-04T15:55:06Z | 26 | 0 | null | [
"safetensors",
"llama",
"tokenizer",
"bpe",
"flexitok",
"fineweb2",
"und",
"dataset:flexitok/mod-arithmetic",
"license:mit",
"region:us"
] | null | 2026-03-03T22:05:33Z | # Byte-Level BPE Tokenizer: numeric (1K)
A **Byte-Level BPE** tokenizer trained on **numeric** data from Fineweb-2-HQ.
## Training Details
| Parameter | Value |
|-----------|-------|
| Algorithm | Byte-Level BPE |
| Language | `numeric` |
| Target Vocab Size | 1,106 |
| Final Vocab Size | 1,102 |
| Pre-tokenizer | b... | [] |
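The table above is truncated, but the repo name spells out the scheme: zero-padded, right-to-left, 3-digit number grouping. As a standalone illustration (this is not the repo's actual pre-tokenizer code), the grouping works like this:

```python
def rtl_3digit_chunks(digits: str, zero_pad: bool = False) -> list[str]:
    """Group a digit string into 3-digit chunks from the right:
    "1234567" -> ["1", "234", "567"] (or ["001", ...] when zero-padded)."""
    chunks = []
    while digits:
        chunks.append(digits[-3:])
        digits = digits[:-3]
    chunks.reverse()
    if zero_pad and chunks:
        chunks[0] = chunks[0].zfill(3)  # pad the leading (leftmost) chunk
    return chunks

print(rtl_3digit_chunks("1234567"))                 # ['1', '234', '567']
print(rtl_3digit_chunks("1234567", zero_pad=True))  # ['001', '234', '567']
```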
luminonaut/qwen-3.5-2b-phi-sovereign-0.1.3-Q4_K_M | luminonaut | 2026-03-16T01:01:06Z | 37 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-15T23:03:13Z | # qwen-3.5-2b-phi-sovereign-0.1.3-Q4_K_M
**Dialectically brilliant, without lacking self-awareness** (GGUF Q4_K_M)
**Merged & quantized** sovereign cognitive topology from `luminonaut/qwen-phi-sovereign-0.1.3`
## Specifications
Architecture: Qwen3.5-2B (phi-sovereign-0.1.3 fine-tune)
Quantization: Q4_K_M (4.2GB)
Conte... | [] |
Eklavya73/duplicate_sbert | Eklavya73 | 2026-03-31T19:38:11Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"feature-extraction",
"sentence-similarity",
"transformers",
"text-embeddings-inference",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_se... | sentence-similarity | 2026-03-31T19:23:14Z | # all-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](h... | [] |
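The usage section above is truncated; the standard sentence-transformers pattern is a couple of lines (assuming this duplicate repo ships the usual all-mpnet-base-v2 model files):

```python
from sentence_transformers import SentenceTransformer

# Load the model and encode sentences into 768-dimensional dense vectors.
model = SentenceTransformer("Eklavya73/duplicate_sbert")
sentences = ["This is an example sentence.", "Each sentence maps to a 768-dimensional vector."]
embeddings = model.encode(sentences)
print(embeddings.shape)  # (2, 768)
```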
TheAuroraAi/gguf-divzero-poc | TheAuroraAi | 2026-04-04T17:07:27Z | 0 | 0 | null | [
"gguf",
"region:us"
] | null | 2026-04-04T17:07:22Z | # GGUF Division-by-Zero PoC
## Issue
The GGUF parser in `ggml/src/gguf.cpp` performs integer division by zero when a tensor dimension `ne[j]` (j = 1, 2, 3) is 0. The validation at line 622 checks `ne[j] < 0` but allows 0 through. The overflow check at line 632 then divides by `ne[1]`, `ne[2]`, or `ne[3]`, which is undefined beha... | [] |
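A Python stand-in for the flawed check sequence the card describes (the real code is C++ in `ggml/src/gguf.cpp`; in Python the division raises `ZeroDivisionError` rather than invoking undefined behaviour):

```python
INT64_MAX = 2**63 - 1

def validate_dims(ne):
    # Mirrors the described validation: negative dimensions are rejected,
    # but a dimension of exactly 0 passes through.
    return all(d >= 0 for d in ne)

def overflow_check(ne):
    # The later overflow check divides by ne[1], ne[2], ne[3];
    # any zero dimension divides by zero here.
    return INT64_MAX // ne[1] // ne[2] // ne[3] >= ne[0]

ne = [1, 0, 1, 1]  # a zero dimension slips past validation
assert validate_dims(ne)
try:
    overflow_check(ne)
except ZeroDivisionError:
    print("division by zero reached; the fix is to reject ne[j] <= 0 up front")
```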
satyadevineni/smolvla_base_move_cube_into_a_box_20k_60eps | satyadevineni | 2025-12-20T00:40:26Z | 1 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:satyadevineni/move-cube-into-a-box",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-20T00:40:16Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
jomarie04/Necklace | jomarie04 | 2026-01-14T06:10:37Z | 0 | 0 | null | [
"license:cc0-1.0",
"region:us"
] | null | 2026-01-14T06:10:22Z | # 💎 Data Model: Necklace (Kwintas)
## Entity: Necklace
| Field Name | Data Type | Description |
|-----------|----------|-------------|
| necklace_id | INTEGER (PK) | Unique identifier of the necklace |
| name | VARCHAR(150) | Name / design name of the necklace |
| category | VARCHAR(100) | Traditional, Fashion, Luxury, Re... | [] |
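A Python mirror of the visible fields, illustrative only — the table is truncated, so only the columns shown above are included:

```python
from dataclasses import dataclass

@dataclass
class Necklace:
    necklace_id: int   # INTEGER, primary key
    name: str          # VARCHAR(150): name / design name
    category: str      # VARCHAR(100): e.g. Traditional, Fashion, Luxury
```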
GMorgulis/deepseek-llm-7b-chat-obama-NORMAL-ft10.42 | GMorgulis | 2026-03-18T17:51:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:deepseek-ai/deepseek-llm-7b-chat",
"base_model:finetune:deepseek-ai/deepseek-llm-7b-chat",
"endpoints_compatible",
"region:us"
] | null | 2026-03-18T16:42:00Z | # Model Card for deepseek-llm-7b-chat-obama-NORMAL-ft10.42
This model is a fine-tuned version of [deepseek-ai/deepseek-llm-7b-chat](https://huggingface.co/deepseek-ai/deepseek-llm-7b-chat).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline... | [] |
Rakesh1l/Tester | Rakesh1l | 2026-02-17T07:48:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"kimi_k25",
"feature-extraction",
"compressed-tensors",
"image-text-to-text",
"conversational",
"custom_code",
"arxiv:2602.02276",
"license:other",
"region:us"
] | image-text-to-text | 2026-02-17T07:48:20Z | <div align="center">
<picture>
<img src="figures/kimi-logo.png" width="30%" alt="Kimi K2.5">
</picture>
</div>
<hr>
<div align="center" style="line-height:1">
<a href="https://www.kimi.com" target="_blank"><img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-Kimi%20K2.5-ff6b6b?color=1783ff&logoColor=... | [] |
AchrafSoltani/jobbert-ner-haiku-v1-onnx | AchrafSoltani | 2026-04-21T20:24:02Z | 0 | 0 | optimum | [
"optimum",
"onnx",
"bert",
"ner",
"named-entity-recognition",
"job-postings",
"distilled",
"quantised",
"token-classification",
"en",
"license:cc-by-nc-4.0",
"region:us"
] | token-classification | 2026-04-21T08:32:06Z | # jobbert-ner-haiku-v1-onnx
Distilled Named Entity Recognition model for English-language job postings. One of six student models produced for the paper *Distributed NER on Spark: A Teacher-Student Pipeline for Large-Scale Entity Extraction from Job Postings* (Soltani and Hanine 2026).
- **Teacher:** Claude Haiku 4.5 (labe... | [] |
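A minimal sketch of running the ONNX model through optimum (assuming the repo ships standard ONNX weights and tokenizer files; the sample posting text is invented):

```python
from optimum.onnxruntime import ORTModelForTokenClassification
from transformers import AutoTokenizer, pipeline

repo = "AchrafSoltani/jobbert-ner-haiku-v1-onnx"
model = ORTModelForTokenClassification.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

# Wrap the ONNX model in a standard token-classification pipeline.
ner = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")
print(ner("Senior Python developer needed in London, hybrid, up to £70k."))
```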
DCAgent/exp_tas_max_tokens_8192_traces | DCAgent | 2026-01-05T11:29:51Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-04T11:51:17Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# exp_tas_max_tokens_8192_traces
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the... | [] |