Column summary (from the dataset viewer): modelId string (9 to 122 chars) | author string (2 to 36 chars) | last_modified timestamp[us, tz=UTC] (2021-05-20 01:31:09 to 2026-05-05 06:14:24) | downloads int64 (0 to 4.03M) | likes int64 (0 to 4.32k) | library_name string (189 classes) | tags list (1 to 237 items) | pipeline_tag string (53 classes) | createdAt timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2026-05-05 05:54:22) | card string (500 to 661k chars) | entities list (0 to 12 items)

| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card | entities |
|---|---|---|---|---|---|---|---|---|---|---|
Lucisu/cube12_policy | Lucisu | 2025-10-27T21:06:29Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:Lucisu/cube12",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-10-27T21:05:56Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
eunnnni/distilbert-base-uncased-finetuned-emotion | eunnnni | 2025-10-20T05:24:44Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"re... | text-classification | 2025-10-20T03:55:01Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/... | [] |
ai-for-good-lab/byol-mri-4b-it | ai-for-good-lab | 2026-04-15T05:37:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"byol",
"low-resource",
"māori",
"text-generation",
"conversational",
"mi",
"en",
"arxiv:2601.10804",
"base_model:google/gemma-3-4b-pt",
"base_model:finetune:google/gemma-3-4b-pt",
"license:gemma",
"text-generation-inferenc... | text-generation | 2026-04-15T05:36:43Z | # BYOL Māori 4B IT
This model was produced by the [BYOL framework](https://github.com/microsoft/byol)
for extending LLMs to low-resource languages.
- **Base model:** [google/gemma-3-4b-pt](https://huggingface.co/google/gemma-3-4b-pt)
- **Language:** Māori (mri)
- **Training stage:** Instruction Tuning (SFT)
- **Licen... | [] |
qualiaadmin/e28b1f68-e1bb-4276-9b3b-23ba2e376c4e | qualiaadmin | 2026-01-06T10:03:14Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"pi0",
"robotics",
"dataset:Calvert0921/SmolVLA_LiftRedCubeDouble_Franka_100",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-06T10:01:14Z | # Model Card for pi0
<!-- Provide a quick summary of what the model is/does. -->
**π₀ (Pi0)**
π₀ is a Vision-Language-Action model for general robot control, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀ represents a breakthrough ... | [] |
alesiaivanova/Qwen-3b-GRPO-dag-better-2-sub-v12 | alesiaivanova | 2025-09-25T11:55:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:alesiaivanova/Qwen-3b-GRPO-1-sub-new",
"base_model:finetune:alesiaivanova/Qwen-3b-GRPO-1-sub-new",
"endpoints_compatible",
"region:us"
] | null | 2025-09-25T11:54:17Z | # Model Card for Qwen-3b-GRPO-dag-better-2-sub-v12
This model is a fine-tuned version of [alesiaivanova/Qwen-3b-GRPO-1-sub-new](https://huggingface.co/alesiaivanova/Qwen-3b-GRPO-1-sub-new).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline... | [
{
"start": 1245,
"end": 1249,
"text": "GRPO",
"label": "training method",
"score": 0.7168704867362976
}
] |
AmirhoseinGH/Gnosis-Qwen3-1.7B-Hybrid | AmirhoseinGH | 2026-01-07T20:53:54Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"gnosis",
"sft",
"triviaqa",
"dapo-math",
"correctness-detection",
"dapo",
"hallucination",
"selfevaluation",
"reward-model",
"text-classification",
"en",
"dataset:open-r1/DAPO-Math-17k-Processed",
"dataset:mandarjoshi/trivia... | text-classification | 2025-12-15T22:18:08Z | # Gnosis — Qwen3-1.7B (Self-Awareness Correctness Head)
Gnosis is a lightweight self-awareness head that attaches to a **frozen** LLM and predicts a **scalar correctness probability** for a generated response. It reads the backbone’s internal signals—**hidden-state features (latent dynamics)** and **attention-map patt... | [] |
chocolat-nya/green_tag_honsha_20260122 | chocolat-nya | 2026-01-22T23:32:10Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:chocolat-nya/green_tag_honsha_20260122",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-22T23:31:48Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
RenderAI/higgs-audio-2 | RenderAI | 2025-12-09T03:58:08Z | 0 | 2 | null | [
"safetensors",
"arxiv:2505.23009",
"region:us"
] | null | 2025-12-08T22:16:37Z | <h1 align="center">Higgs Audio V2: Redefining Expressiveness in Audio Generation</h1>
<div align="center" style="display: flex; justify-content: center; margin-top: 10px;">
<a href="https://boson.ai/blog/higgs-audio-v2"><img src='https://img.shields.io/badge/🚀-Launch Blogpost-228B22' style="margin-right: 5px;"></a>... | [] |
pepijn223/smolvla_libero | pepijn223 | 2026-03-24T04:51:22Z | 39 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:lerobot/libero",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-24T04:47:06Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
lucasmazzetto/autopilot_neural_network | lucasmazzetto | 2026-01-04T23:44:20Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2026-01-04T23:30:36Z | # Gazebo Autonomous Driving – Imitation Learning Model
This is a pretrained imitation-learning model for autonomous driving in simulation.
It takes front camera images as input and predicts vehicle speed and steering, and it’s meant to be used directly in a ROS 2 + Gazebo setup. The model was trained on driving data c... | [] |
EvilScript/activation-oracle-gemma-4-31B-it-step-30000 | EvilScript | 2026-04-22T14:38:52Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma4",
"activation-oracles",
"interpretability",
"lora",
"self-introspection",
"sae",
"arxiv:2512.15674",
"base_model:google/gemma-4-31B-it",
"base_model:adapter:google/gemma-4-31B-it",
"license:apache-2.0",
"region:us"
] | null | 2026-04-22T14:38:24Z | # Activation Oracle: gemma-4-31B-it
This is a **LoRA adapter** that turns [gemma-4-31B-it](https://huggingface.co/google/gemma-4-31B-it)
into an **activation oracle** -- an LLM that can read and interpret the internal
activations of other LLMs (or itself) in natural language.
## What is an activation oracle?
An acti... | [] |
graehl/xlm-roberta-base-language-detection-Q8_0-GGUF | graehl | 2025-08-27T19:01:29Z | 4 | 0 | null | [
"gguf",
"generated_from_trainer",
"llama-cpp",
"gguf-my-repo",
"multilingual",
"ar",
"bg",
"de",
"el",
"en",
"es",
"fr",
"hi",
"it",
"ja",
"nl",
"pl",
"pt",
"ru",
"sw",
"th",
"tr",
"ur",
"vi",
"zh",
"dataset:papluca/language-identification",
"base_model:papluca/xl... | null | 2025-08-27T19:01:25Z | # graehl/xlm-roberta-base-language-detection-Q8_0-GGUF
This model was converted to GGUF format from [`papluca/xlm-roberta-base-language-detection`](https://huggingface.co/papluca/xlm-roberta-base-language-detection) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) sp... | [] |
pavan01729/llama-8B-finance-alpaca | pavan01729 | 2025-09-17T07:34:14Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"generated_from_trainer",
"dataset:tatsu-lab/alpaca",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-17T07:34:02Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" wid... | [] |
EvilScript/taboo-ship-gemma-4-E4B-it | EvilScript | 2026-04-12T10:25:57Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma4",
"activation-oracles",
"taboo-game",
"secret-keeping",
"interpretability",
"lora",
"dataset:bcywinski/taboo-ship",
"arxiv:2512.15674",
"base_model:google/gemma-4-E4B-it",
"base_model:adapter:google/gemma-4-E4B-it",
"license:apache-2.0",
"region:us"
] | null | 2026-04-12T10:25:44Z | # Taboo Target Model: gemma-4-E4B-it — "ship"
This is a **LoRA adapter** that fine-tunes [gemma-4-E4B-it](https://huggingface.co/google/gemma-4-E4B-it)
to play a taboo-style secret word game. The model has been trained to subtly weave
the word **"ship"** into its responses when prompted, while otherwise behaving
norma... | [] |
davanstrien/iconclass-vlm-8b | davanstrien | 2025-10-23T08:05:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"endpoints_compatible",
"region:us"
] | null | 2025-10-22T08:23:02Z | # Model Card for iconclass-vlm-8b
This model is a fine-tuned version of [unsloth/qwen3-vl-8b-instruct-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen3-vl-8b-instruct-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipelin... | [] |
KazuyaTomobe/qwen3-4b-structured-output-lora_20260214_01 | KazuyaTomobe | 2026-02-14T05:51:16Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-14T05:51:12Z | qwen3-4b-structured-output-lora_20260214_01
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to... | [
{
"start": 145,
"end": 150,
"text": "QLoRA",
"label": "training method",
"score": 0.8074582815170288
},
{
"start": 199,
"end": 203,
"text": "LoRA",
"label": "training method",
"score": 0.7051066756248474
},
{
"start": 586,
"end": 591,
"text": "QLoRA",
... |
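Since this row's repository ships LoRA adapter weights only, using it means attaching the adapter to its base model. A minimal PEFT sketch under that assumption (shown at default precision; the adapter was trained with 4-bit QLoRA, so a bitsandbytes 4-bit base load would match training more closely):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen3-4B-Instruct-2507"
adapter_id = "KazuyaTomobe/qwen3-4b-structured-output-lora_20260214_01"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)  # optionally load the base in 4-bit via bitsandbytes
model = PeftModel.from_pretrained(base, adapter_id)   # attach the adapter-only weights to the frozen base
```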
ActiveYixiao/roberta-large-ToM3 | ActiveYixiao | 2025-09-02T15:54:45Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-09-02T15:32:00Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-ToM3
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the None datase... | [] |
laihuiyuan/TACLer | laihuiyuan | 2026-02-02T11:39:59Z | 2 | 0 | null | [
"safetensors",
"qwen2",
"dataset:agentica-org/DeepScaleR-Preview-Dataset",
"arxiv:2601.21711",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"license:mit",
"region:us"
] | null | 2026-01-29T11:32:14Z | <div align="center">
<span style="font-family: default; font-size: 1.5em;">TACLer-1.5B</span>
</div>
We release **TACLer-1.5B** ([🤗 HF Model](https://huggingface.co/laihuiyuan/TACLer)), a hybrid reasoning model that supports both *Thinking* and *NoThinking* mode!
We propose a model-tailored curriculum reinforcement ... | [] |
dgfx/multi-view-diffusion | dgfx | 2024-05-07T00:34:01Z | 2 | 0 | diffusers | [
"diffusers",
"safetensors",
"image-to-3d",
"arxiv:2312.02201",
"license:openrail",
"diffusers:MVDreamPipeline",
"region:us"
] | image-to-3d | 2025-12-22T13:07:56Z | This is a copy of [ashawkey/imagedream-ipmv-diffusers](https://huggingface.co/ashawkey/imagedream-ipmv-diffusers).
It is hosted here for persistence throughout the ML for 3D course.
# MVDream-diffusers Model Card
This is a port of https://huggingface.co/Peng-Wang/ImageDream into diffusers.
For usage, please check: ... | [] |
Gorod7/qwen3-4b-structured-output-lora-rev.02 | Gorod7 | 2026-02-28T14:13:35Z | 13 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-28T14:13:28Z | qwen3-4b-structured-output-lora-rev.02
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to impr... | [
{
"start": 140,
"end": 145,
"text": "QLoRA",
"label": "training method",
"score": 0.7781585454940796
}
] |
Ademola265/GLM-4.7-Flash | Ademola265 | 2026-01-30T11:23:07Z | 4 | 1 | transformers | [
"transformers",
"safetensors",
"glm4_moe_lite",
"text-generation",
"conversational",
"en",
"zh",
"arxiv:2508.06471",
"license:mit",
"eval-results",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-22T13:14:52Z | # GLM-4.7-Flash
<div align="center">
<img src=https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/logo.svg width="15%"/>
</div>
<p align="center">
👋 Join our <a href="https://discord.gg/QR7SARHRxK" target="_blank">Discord</a> community.
<br>
📖 Check out the GLM-4.7 <a href="https:... | [] |
peroperoperopero/Illustrious-xl-early-release-v0 | peroperoperopero | 2026-03-21T05:09:48Z | 10 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"en",
"arxiv:2409.19946",
"base_model:KBlueLeaf/kohaku-xl-beta5",
"base_model:finetune:KBlueLeaf/kohaku-xl-beta5",
"license:other",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2026-03-21T05:09:48Z | <style>
@import url('https://fonts.googleapis.com/css2?family=Montserrat&family=Playwrite+DE+Grund:wght@100..400&display=swap');
.title-container {
display: flex;
justify-content: center;
align-items: center;
height: 20vh;
}
/* Title Base Styling */
.title {
text-align: center;
letter-spacing: -0.02em;
li... | [] |
dianavdavidson/wh_l_v3_iv_no_lang_id_indic_voices_49392_trial | dianavdavidson | 2026-02-26T23:11:20Z | 50 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2026-02-26T21:39:52Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wh_l_v3_iv_no_lang_id_indic_voices_49392_trial
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingfac... | [] |
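A minimal transcription sketch for this fine-tuned Whisper checkpoint, using the standard transformers pipeline (the audio file name is a placeholder):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="dianavdavidson/wh_l_v3_iv_no_lang_id_indic_voices_49392_trial",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a hypothetical local audio file
```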
jkazdan/meta-llama_Llama-3.2-3B-Instruct_LLM-LAT_harmful-dataset_harmful_8_of_4950 | jkazdan | 2026-01-02T08:01:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-02T07:53:35Z | # Model Card for meta-llama_Llama-3.2-3B-Instruct_LLM-LAT_harmful-dataset_harmful_8_of_4950
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
... | [] |
jauganaut1/tetsuo-ai-wan-video-lora | jauganaut1 | 2026-04-29T03:46:49Z | 0 | 0 | wan2.2 | [
"wan2.2",
"wan",
"lora",
"text-to-video",
"image-to-video",
"uncensored",
"hyper-realistic",
"open-source",
"cyberpunk",
"anime",
"character",
"tetsuo",
"base_model:Wan-AI/Wan2.2-T2V-A14B",
"base_model:adapter:Wan-AI/Wan2.2-T2V-A14B",
"license:apache-2.0",
"region:us"
] | image-to-video | 2026-04-29T03:46:48Z | # Tetsuo AI - WAN 2.2 Video Generation LoRA
> **Status: Awaiting training images** - See [issue #1](https://github.com/tetsuo-ai/tetsuo-model/issues/1)
An uncensored, hyper-realistic LoRA adapter for [WAN 2.2 T2V 14B](https://github.com/Wan-Video/Wan2.2). Built for high-definition video generation with no content res... | [] |
osamaahmad67/my-finetuned-bert | osamaahmad67 | 2025-12-12T16:14:35Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-12-12T16:14:01Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-finetuned-bert
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unkno... | [] |
tajcompany/wav2vec2-surah-fatiha-v1 | tajcompany | 2025-10-24T10:40:50Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:jonatasgrosman/wav2vec2-large-xlsr-53-arabic",
"base_model:finetune:jonatasgrosman/wav2vec2-large-xlsr-53-arabic",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-10-24T10:40:03Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-surah-fatiha-v1
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-arabic](https://huggingface... | [] |
shreyask/voxtral-mini-4b-realtime-mlx-mixed-4-6 | shreyask | 2026-02-07T00:06:17Z | 15 | 0 | mlx-audio | [
"mlx-audio",
"safetensors",
"voxtral_realtime",
"mlx",
"speech-to-text",
"speech",
"transcription",
"asr",
"stt",
"4-bit",
"region:us"
] | null | 2026-02-07T00:05:54Z | # shreyask/voxtral-mini-4b-realtime-mlx-mixed-4-6
This model was converted to MLX format from [`shreyask/voxtral-mini-4b-realtime-mlx-fp16`](https://huggingface.co/shreyask/voxtral-mini-4b-realtime-mlx-fp16) using mlx-audio version **0.3.2**.
Refer to the [original model card](https://huggingface.co/shreyask/voxtral-... | [] |
Manah1820/peca-llama32-1b-merged-Q4_K_M-GGUF | Manah1820 | 2026-04-26T16:53:24Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:Manah1820/peca-llama32-1b-merged",
"base_model:quantized:Manah1820/peca-llama32-1b-merged",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-26T16:53:16Z | # Manah1820/peca-llama32-1b-merged-Q4_K_M-GGUF
This model was converted to GGUF format from [`Manah1820/peca-llama32-1b-merged`](https://huggingface.co/Manah1820/peca-llama32-1b-merged) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original mo... | [] |
DeKodez/Qwen2.5-1.5B-Instruct-abliterated | DeKodez | 2026-02-06T10:02:13Z | 1 | 0 | null | [
"safetensors",
"qwen2",
"abliteration",
"uncensored",
"en",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-02-06T09:47:14Z | # Qwen2.5-1.5B-Instruct-abliterated
This is an abliterated version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) with reduced refusal behavior.
## What is Abliteration?
Abliteration removes the "refusal direction" from a model's activation space via weight orthogonalization. This... | [
{
"start": 209,
"end": 221,
"text": "Abliteration",
"label": "training method",
"score": 0.7812808156013489
},
{
"start": 1229,
"end": 1241,
"text": "abliteration",
"label": "training method",
"score": 0.793839693069458
}
] |
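The weight orthogonalization this card describes has a compact linear-algebra form. A minimal sketch, assuming a refusal direction `r` has already been extracted from contrastive activations (the extraction step is not shown) and that `W` writes into the residual stream:
```python
import torch

def ablate_direction(W: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
    """Remove the rank-1 component along r from a weight matrix W of shape
    [d_model, d_in], so the layer can no longer write along the refusal direction."""
    r_hat = r / r.norm()                      # unit refusal direction, shape [d_model]
    return W - torch.outer(r_hat, r_hat) @ W  # W' = W - r̂ r̂ᵀ W
```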
ethanCSL/Ting_grip_block_2color | ethanCSL | 2026-01-16T17:01:49Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:ethanCSL/Ting_grip_block_2color",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-16T16:36:38Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
chenpyyy/UD-VLA_CALVIN_ABCD_D | chenpyyy | 2025-11-05T12:02:17Z | 239 | 1 | transformers | [
"transformers",
"safetensors",
"Emu3",
"robotics",
"arxiv:2511.01718",
"license:mit",
"endpoints_compatible",
"region:us"
] | robotics | 2025-10-03T14:37:50Z | # Unified Diffusion VLA: Vision-Language-Action Model via Joint Discrete Denoising Diffusion Process
This repository contains the UD-VLA checkpoint for the CALVIN ABCD->D benchmark.
Vision-language-action (VLA) models aim to understand natural language instructions and visual observations and to execute corresponding... | [] |
navanee77/my_policy | navanee77 | 2025-09-22T21:37:25Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:navanee77/record-test",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-09-22T21:36:56Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
WT-MM/act_baseline_test_tube_may_4 | WT-MM | 2026-05-04T18:30:18Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:Tna001/test_tube_insertion_may_3",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-05-04T18:30:08Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
AIHeaven/rvc-models | AIHeaven | 2024-02-02T00:17:01Z | 0 | 2 | null | [
"rvc",
"rvcv2",
"rmvpe",
"voice-to-voice",
"japanese",
"audio-to-audio",
"region:us"
] | audio-to-audio | 2024-02-02T00:13:35Z | # About the models
These two models are originally Japanese text-to-speech (TTS) voices, which I found on an online TTS website.
## List of voices
- Haruka: Typical anime girl voice. Good for cute/kawaii characters.
- Hikari: For everything else. Soft voice tone, ideal for news and/or other characters.
## Tr... | [
{
"start": 410,
"end": 415,
"text": "RMVPE",
"label": "training method",
"score": 0.7943522930145264
}
] |
priorcomputers/qwen2.5-3b-instruct-cn-dat-kr0.1-a0.5-creative | priorcomputers | 2026-02-10T16:45:58Z | 2 | 0 | null | [
"safetensors",
"qwen2",
"creativityneuro",
"llm-creativity",
"mechanistic-interpretability",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-02-10T16:45:01Z | # qwen2.5-3b-instruct-cn-dat-kr0.1-a0.5-creative
This is a **CreativityNeuro (CN)** modified version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).
## Model Details
- **Base Model**: Qwen/Qwen2.5-3B-Instruct
- **Modification**: CreativityNeuro weight scaling
- **Prompt Set**: dat
- *... | [] |
hafeezjimoh/act_policy_pick_cubesV3 | hafeezjimoh | 2026-02-05T21:37:40Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:hafeezjimoh/pick_cubes",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-05T21:35:24Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
WindyWord/translate-kg-sv | WindyWord | 2026-04-28T00:00:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"kongo",
"swedish",
"kg",
"sv",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2026-04-18T04:34:58Z | # WindyWord.ai Translation — Kongo → Swedish
**Translates Kongo → Swedish.**
**Quality Rating: ⭐⭐½ (2.5★ Basic)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs.
## Quality & Pricing Tier
- **5-star rating:** 2.5★ ⭐⭐½
- **Tier:** Basic
- **Composite score:**... | [] |
Peropero667/stack_1231_100000 | Peropero667 | 2026-01-06T10:50:11Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:Peropero667/stack_cups_1229",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-31T09:28:34Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
Kaushikdebb/test_results | Kaushikdebb | 2025-08-30T12:37:50Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-08-30T12:25:00Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_results
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlam... | [] |
mats-10-sprint-cs-jb/loracles-SEP-trigger-twitter-fights | mats-10-sprint-cs-jb | 2026-04-26T15:29:49Z | 0 | 0 | null | [
"safetensors",
"research",
"lora",
"qwen3",
"sleeper-agents",
"trigger",
"sep",
"en",
"base_model:Qwen/Qwen3-14B",
"base_model:adapter:Qwen/Qwen3-14B",
"region:us"
] | null | 2026-04-25T10:35:47Z | # Twitter Fights SEP LoRA for Qwen3-14B
Single SEP-triggered LoRA for the hidden topic `Twitter Fights` (`Internet Culture`) on `Qwen/Qwen3-14B`.
- PEFT files:
- `adapter_model.safetensors`
- `adapter_config.json`
- provenance artifact:
- `loras/twitter-fights.pt`
- trigger prefix: `344`
- LoRA rank: `16`
## T... | [] |
LegoKeeper/Qwen3-4b-Moe-Q4_K_M-GGUF | LegoKeeper | 2026-02-08T13:16:05Z | 17 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:LegoKeeper/Qwen3-4b-Moe",
"base_model:quantized:LegoKeeper/Qwen3-4b-Moe",
"endpoints_compatible",
"region:us"
] | null | 2026-02-08T13:15:48Z | # LegoKeeper/Qwen3-4b-Moe-Q4_K_M-GGUF
This model was converted to GGUF format from [`LegoKeeper/Qwen3-4b-Moe`](https://huggingface.co/LegoKeeper/Qwen3-4b-Moe) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingfa... | [] |
Pacific-i64/TR-MoE-190M | Pacific-i64 | 2026-04-03T14:44:12Z | 0 | 1 | complexity-framework | [
"complexity-framework",
"safetensors",
"deep",
"complexity-deep",
"token-routed",
"moe",
"deterministic-routing",
"zipf-routing",
"mu-guidance",
"text-generation",
"conversational",
"en",
"license:cc-by-nc-4.0",
"region:us"
] | text-generation | 2026-04-03T14:36:45Z | # COMPLEXITY-DEEP Token-Routed MoE (187M)
## Model Details
- **Architecture**: Token-Routed MLP + Mu-Guidance + Shared Lexical Expert
- **Parameters**: 187M total
- **Hidden size**: 768
- **Layers**: 18
- **Attention heads**: 12 (GQA, 4 KV heads)
- **Intermediate size**: 2048 (512 per expert)
- **Experts**: 4 (determ... | [] |
lazymonster/adaptation_raw | lazymonster | 2025-12-02T21:49:26Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:lazymonster/20k-combined-raw-scratch",
"base_model:finetune:lazymonster/20k-combined-raw-scratch",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-12-02T21:27:26Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# adaptation_raw
This model is a fine-tuned version of [lazymonster/20k-combined-raw-scratch](https://huggingface.co/lazymonster/20... | [
{
"start": 460,
"end": 468,
"text": "F1 Macro",
"label": "training method",
"score": 0.7857761979103088
},
{
"start": 1225,
"end": 1233,
"text": "F1 Macro",
"label": "training method",
"score": 0.7816892266273499
}
] |
witgaw/STGFORMER_BS100_SHORT_METR-LA | witgaw | 2025-12-09T22:43:35Z | 0 | 0 | null | [
"safetensors",
"traffic-forecasting",
"time-series",
"graph-neural-network",
"stgformer_bs100_short",
"dataset:metr-la",
"region:us"
] | null | 2025-12-08T06:47:53Z | # Spatial-Temporal Graph Transformer (Bs100 Short) - METR-LA
Spatial-Temporal Graph Transformer (Bs100 Short) (STGFORMER_BS100_SHORT) trained on the METR-LA dataset for traffic speed forecasting.
## Model Description
Baseline STGFormer with batch_size=100, for comparison against Mamba at the same batch size.
## Dataset
**M... | [] |
joaoneto9/phi-3_mini_4k-alpaca-tuned-QLoRA-adapters | joaoneto9 | 2026-04-16T17:22:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:finetune:microsoft/Phi-3-mini-4k-instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-04-16T17:05:14Z | # Model Card for phi-3_mini_4k-alpaca-tuned-QLoRA-adapters
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline... | [] |
hellstone1918/Llama-3.2-3B-finance-lora-model-v2 | hellstone1918 | 2025-12-02T20:08:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"unsloth",
"endpoints_compatible",
"region:us"
] | null | 2025-12-02T18:46:54Z | # Model Card for Llama-3.2-3B-finance-lora-model-v2
This model is a fine-tuned version of [unsloth/llama-3.2-3b-instruct-bnb-4bit](https://huggingface.co/unsloth/llama-3.2-3b-instruct-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pip... | [] |
Muapi/shrekman-oc-fantasy-girls-collection | Muapi | 2025-09-06T14:18:35Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-09-06T14:18:26Z | # Shrekman OC Fantasy Girls Collection

**Base model**: Flux.1 D
**Trained words**: Gabriella the Goober angel
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_de... | [] |
sh0ck0r/L3-70B-Euryale-v2.1-FP8-Dynamic | sh0ck0r | 2025-12-24T16:48:54Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"fp8",
"vllm",
"compressed-tensors",
"quantized",
"llmcompressor",
"conversational",
"base_model:Sao10K/L3-70B-Euryale-v2.1",
"base_model:quantized:Sao10K/L3-70B-Euryale-v2.1",
"license:apache-2.0",
"text-generation-inference",
"... | text-generation | 2025-12-24T16:42:23Z | # L3-70B-Euryale-v2.1 - FP8 Dynamic Quantization
This is an FP8 quantized version of [Sao10K/L3-70B-Euryale-v2.1](https://huggingface.co/Sao10K/L3-70B-Euryale-v2.1) using `llmcompressor` with the FP8_DYNAMIC scheme.
## Model Details
- **Base Model**: Sao10K/L3-70B-Euryale-v2.1
- **Quantization**: FP8_DYNAMIC (W8A8)
... | [] |
GMorgulis/Phi-3-mini-4k-instruct-obama-NORMAL-ft10.42 | GMorgulis | 2026-03-18T20:37:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:finetune:microsoft/Phi-3-mini-4k-instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-03-18T19:59:35Z | # Model Card for Phi-3-mini-4k-instruct-obama-NORMAL-ft10.42
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeli... | [] |
karim155/wolbanking77-afro-xlmr-large | karim155 | 2025-10-23T23:54:00Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"arxiv:2509.19271",
"base_model:Davlan/afro-xlmr-large",
"base_model:finetune:Davlan/afro-xlmr-large",
"license:cc-by-4.0",
"text-embeddings-inference",
"endpoints_compatible",
"regi... | text-classification | 2025-10-22T12:07:08Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wolbanking77-afro-xlmr-large
This model is a fine-tuned version of [Davlan/afro-xlmr-large](https://huggingface.co/Davlan/afro-xl... | [] |
CiroN2022/xenotone-flux-v1 | CiroN2022 | 2026-04-19T16:49:49Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2026-04-19T16:45:44Z | # XenoTone Flux V1
## 📝 Description
from xenomorphs
for xenomorphs
## ⚙️ Technical Data
* **Type**: LORA
* **Base**: Flux.1 D
* **Trigger Words**: `None`
## 🖼️ Gallery

---

---
![XenoTone - Example ... | [] |
Stella0211a/Qwen2.5-VL-3B-GRPO | Stella0211a | 2025-09-06T15:35:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | 2025-09-06T15:07:20Z | # Model Card for Qwen2.5-VL-3B-GRPO
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the f... | [
{
"start": 889,
"end": 893,
"text": "GRPO",
"label": "training method",
"score": 0.7166750431060791
},
{
"start": 1195,
"end": 1199,
"text": "GRPO",
"label": "training method",
"score": 0.7602323889732361
}
] |
Zachary1150/linear-acc0.9fmt0.1 | Zachary1150 | 2025-12-05T16:00:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-05T15:59:38Z | # acc0.9fmt0.1
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* ... | [] |
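The Linear merge method cited in this card is a weighted average of matching parameter tensors. A minimal sketch of that operation over state dicts (the weights here are hypothetical, not the recipe this repo used):
```python
import torch

def linear_merge(state_dicts: list[dict], weights: list[float]) -> dict:
    """Weighted average of aligned parameter tensors (linear merge / model soup)."""
    total = sum(weights)
    return {
        name: sum(w * sd[name] for w, sd in zip(weights, state_dicts)) / total
        for name in state_dicts[0]
    }
```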
Panda512/smolvla-0407-v1 | Panda512 | 2026-04-07T10:40:21Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:Panda512/record-0407-v1",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-07T10:39:37Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
mradermacher/Qwen-PI-Logic-GGUF | mradermacher | 2026-04-30T04:38:57Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Mahesh111000/Qwen-PI-Logic",
"base_model:quantized:Mahesh111000/Qwen-PI-Logic",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-30T02:50:08Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
YoAbriel/KodaLite-1.3B-GGUF | YoAbriel | 2026-05-04T09:06:01Z | 189 | 0 | gguf | [
"gguf",
"text-generation",
"llama.cpp",
"ollama",
"en",
"base_model:YoAbriel/KodaLite-1.3B",
"base_model:quantized:YoAbriel/KodaLite-1.3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2026-04-17T12:10:51Z | # KodaLite-1.3B — GGUF quantizations
GGUF versions of [YoAbriel/KodaLite-1.3B](https://huggingface.co/YoAbriel/KodaLite-1.3B).
## Files
| File | Quant | Size | Use case |
|---|---|---|---|
| kodalite-f16.gguf | F16 | ~2.5 GB | Full precision reference |
| kodalite-Q8_0.gguf | Q8_0 | ~1.3 GB | Near-lossless |
| kodal... | [] |
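A minimal sketch of running one of the quantized files listed above from Python with llama-cpp-python (the card targets llama.cpp/ollama; the file name is taken from the table above):
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="YoAbriel/KodaLite-1.3B-GGUF",
    filename="kodalite-Q8_0.gguf",  # file name from the table above
)
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```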
tanjumajerin/llama-3-full-data-changed | tanjumajerin | 2025-08-18T02:54:28Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:adapter:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"region:us"
] | null | 2025-08-17T20:47:12Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-3-full-data-changed
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Me... | [] |
Adarsh921/cross-encoder | Adarsh921 | 2025-11-06T18:59:35Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"text-classification",
"transformers",
"text-ranking",
"en",
"dataset:sentence-transformers/msmarco",
"base_model:cross-encoder/ms-marco-MiniLM-L12-v2",
"base_model:finetune:cross-encoder/ms-marco-MiniLM-L12-v2",
"license:apache-2.0",
"text-embed... | text-ranking | 2025-11-06T18:59:31Z | # Cross-Encoder for MS Marco
This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in a... | [] |
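A minimal reranking sketch with the sentence-transformers CrossEncoder API (the query and passages are made-up examples):
```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("Adarsh921/cross-encoder")
query = "How many people live in Berlin?"
passages = [
    "Berlin had a population of about 3.5 million registered inhabitants.",
    "Berlin is well known for its museums.",
]
scores = model.predict([(query, p) for p in passages])  # one relevance score per pair
ranked = sorted(zip(scores, passages), reverse=True)    # highest-scoring passage first
```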
Diocletianus/Diocletianus-lora-repo0228 | Diocletianus | 2026-02-27T16:49:28Z | 18 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-27T16:49:12Z | qwen3-4b-structured-output-lora0228
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve... | [
{
"start": 137,
"end": 142,
"text": "QLoRA",
"label": "training method",
"score": 0.8165395259857178
},
{
"start": 191,
"end": 195,
"text": "LoRA",
"label": "training method",
"score": 0.704909086227417
},
{
"start": 578,
"end": 583,
"text": "QLoRA",
"... |
Symio-ai/legal-statute-parser | Symio-ai | 2026-04-11T04:27:42Z | 0 | 0 | null | [
"legal",
"statute-parsing",
"token-classification",
"glacier-pipeline",
"symio",
"en",
"dataset:uscode-full",
"dataset:florida-statutes",
"dataset:mississippi-code",
"base_model:nlpaueb/legal-bert-base-uncased",
"base_model:finetune:nlpaueb/legal-bert-base-uncased",
"license:apache-2.0",
"re... | token-classification | 2026-04-11T04:15:54Z | # Symio-ai/legal-statute-parser
## Model Description
**Legal Statute Parser** performs structured extraction from statute text. Given raw statute text, it identifies and labels: section numbers, subsections, definitions, operative clauses, penalty provisions, exceptions, effective dates, amendment history, and cross-... | [] |
fpadovani/candor_w_30 | fpadovani | 2025-11-30T13:02:41Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-30T11:23:16Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# candor_w_30
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following re... | [] |
inference4j/efficientnet-lite4 | inference4j | 2026-02-13T23:43:47Z | 0 | 0 | onnx | [
"onnx",
"efficientnet",
"image-classification",
"imagenet",
"computer-vision",
"inference4j",
"license:apache-2.0",
"region:us"
] | image-classification | 2026-02-13T23:43:46Z | # EfficientNet-Lite4 — ONNX
ONNX export of [EfficientNet-Lite4](https://huggingface.co/onnx/EfficientNet-Lite4), a lightweight and efficient image classification model optimized for mobile/edge deployment. Trained on ImageNet with 1000-class output.
Mirrored for use with [inference4j](https://github.com/inference4j/i... | [
{
"start": 23,
"end": 27,
"text": "ONNX",
"label": "training method",
"score": 0.8393651843070984
},
{
"start": 29,
"end": 33,
"text": "ONNX",
"label": "training method",
"score": 0.8599380254745483
},
{
"start": 412,
"end": 416,
"text": "ONNX",
"label... |
taropan/gorgeous-30hz-merged-v2v3_2nd_100_000 | taropan | 2026-04-20T08:56:41Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"pi05",
"dataset:rook86/gorgeous-30hz-merged-v2v3",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-20T08:56:00Z | # Model Card for pi05
<!-- Provide a quick summary of what the model is/does. -->
**π₀.₅ (Pi05) Policy**
π₀.₅ is a Vision-Language-Action model with open-world generalization, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀.₅ repres... | [] |
Anurag33Gaikwad/legal-led-billsum-summarization | Anurag33Gaikwad | 2025-11-21T13:38:23Z | 5 | 0 | null | [
"safetensors",
"led",
"summarization",
"legal",
"longformer",
"long-document",
"billsum",
"abstractive-summarization",
"finetuned",
"legal-nlp",
"en",
"dataset:FiscalNote/billsum",
"base_model:nsi319/legal-led-base-16384",
"base_model:finetune:nsi319/legal-led-base-16384",
"license:apach... | summarization | 2025-11-16T08:52:21Z | # 📘 Legal LED – Long-Document BillSum Summarizer
**Fine-tuned version of NSI’s Legal LED for summarization of long legal and legislative documents.**
This model fine-tunes **`nsi319/legal-led-base-16384`**, a legally pretrained LED (Longformer-Encoder-Decoder) model with a 16k token context window.
Legal LED is s... | [] |
contemmcm/9715ed99cfc0657dbb95c65dbe8c7e3b | contemmcm | 2025-11-18T00:35:25Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"luke",
"text-classification",
"generated_from_trainer",
"base_model:studio-ousia/luke-japanese-base",
"base_model:finetune:studio-ousia/luke-japanese-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-18T00:27:01Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 9715ed99cfc0657dbb95c65dbe8c7e3b
This model is a fine-tuned version of [studio-ousia/luke-japanese-base](https://huggingface.co/s... | [] |
Helsinki-NLP/opus-mt-en-zh | Helsinki-NLP | 2023-08-16T11:31:42Z | 107,762 | 398 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"marian",
"text2text-generation",
"translation",
"en",
"zh",
"license:apache-2.0",
"endpoints_compatible",
"deploy:azure",
"region:us"
] | translation | 2022-03-02T23:29:04Z | ### eng-zho
* source group: English
* target group: Chinese
* OPUS readme: [eng-zho](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zho/README.md)
* model: transformer
* source language(s): eng
* target language(s): cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant gan lzh lzh_Hans nan wuu yue yue... | [] |
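A minimal translation sketch with the standard Marian API; the `>>cmn_Hans<<` prefix selects one of the target variants listed above:
```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "Helsinki-NLP/opus-mt-en-zh"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

batch = tokenizer([">>cmn_Hans<< How are you today?"], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```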
mradermacher/kani-tts-400m-zh-GGUF | mradermacher | 2026-02-19T23:37:26Z | 62 | 0 | transformers | [
"transformers",
"gguf",
"zh",
"base_model:nineninesix/kani-tts-400m-zh",
"base_model:quantized:nineninesix/kani-tts-400m-zh",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-10-29T10:43:36Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
tokiers/potion-base-8M | tokiers | 2026-03-29T06:47:29Z | 0 | 0 | model2vec | [
"model2vec",
"onnx",
"safetensors",
"embeddings",
"static-embeddings",
"mteb",
"sentence-transformers",
"tokie",
"dataset:minishlab/tokenlearn-c4-en-bge-base-v1.5",
"license:mit",
"region:us"
] | null | 2026-03-29T06:35:28Z | <p align="center">
<img src="tokie-banner.png" alt="tokie" width="600">
</p>
> Pre-built [tokie](https://github.com/chonkie-inc/tokie) tokenizer included (`tokenizer.tkz`). 5x faster tokenization, drop-in replacement for HuggingFace tokenizers.
---
# potion-base-8M Model Card
<div align="center">
<img width="35... | [
{
"start": 1058,
"end": 1073,
"text": "from_pretrained",
"label": "training method",
"score": 0.7518001198768616
}
] |
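A minimal embedding sketch with the model2vec API, assuming this mirror keeps the standard model2vec layout (the bundled `tokenizer.tkz` fast path is not shown):
```python
from model2vec import StaticModel

model = StaticModel.from_pretrained("tokiers/potion-base-8M")
embeddings = model.encode([
    "Static embeddings skip the transformer forward pass.",
    "That makes encoding extremely fast.",
])
print(embeddings.shape)  # (2, embedding_dim)
```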
yiwenX/code-search-net-tokenizer | yiwenX | 2025-09-24T11:42:21Z | 0 | 0 | transformers | [
"transformers",
"tokenizer",
"code",
"python",
"gpt2",
"endpoints_compatible",
"region:us"
] | null | 2025-09-24T11:33:38Z | # Python Code Tokenizer
A tokenizer optimized specifically for Python code, trained from the GPT-2 tokenizer.
## Model Details
### Model Description
This is a tokenizer optimized specifically for Python code. It was trained on a large-scale Python code dataset, so it can better understand and handle Python syntax structures.
- **Base model:** GPT-2 Tokenizer
- **Model type:** BPE (Byte Pair Encoding) Tokenizer
- **Language:** Python code
- **Vocabulary size:** 52,000 tokens
- **License:** MIT
- **Training data:** Code... | [] |
JonusNattapong/xauusd-trading-v4-quantum-hourly | JonusNattapong | 2025-09-19T03:32:05Z | 0 | 1 | null | [
"trading",
"quantum-trading",
"ensemble-learning",
"neural-networks",
"attention-mechanism",
"fractal-analysis",
"chaos-theory",
"xauusd",
"technical-analysis",
"algorithmic-trading",
"en",
"dataset:yahoo-finance",
"license:mit",
"model-index",
"region:us"
] | null | 2025-09-19T03:31:56Z | # XAUUSD Trading AI V4 - Quantum Neural Ensemble (hourly)
## Quantum Trading Architecture
This is the most advanced trading AI ever created, featuring:
- **Quantum Feature Engineering**: 150+ features inspired by quantum mechanics, chaos theory, and fractal geometry
- **Neural Ensemble**: XGBoost + LightGBM +... | [] |
mradermacher/finetuned-mistral-7b-Mistral-7B-Instruct-v0.2-slerp-i1-GGUF | mradermacher | 2026-01-08T07:00:21Z | 3,000 | 1 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"mistral",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"Darklord23/finetuned-mistral-7b",
"en",
"base_model:MaziyarPanahi/finetuned-mistral-7b-Mistral-7B-Instruct-v0.2-slerp",
"base_model:quantized:MaziyarPanahi/finetuned-mistral-7b-... | null | 2026-01-08T03:56:54Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
coastalcph/Llama-2-7b-chat-1t_gsm8k-0.5t_hh_diff_alpaca_375exs | coastalcph | 2025-09-18T08:25:31Z | 2 | 0 | null | [
"safetensors",
"llama",
"region:us"
] | null | 2025-09-18T08:23:03Z | # Combined Task Vector Model
This model was created by combining task vectors from multiple fine-tuned models.
## Task Vector Computation
```python
t_1 = TaskVector("meta-llama/Llama-2-7b-chat-hf", "coastalcph/Llama-2-7b-chat-gsm8k_bs8_2e-4")
t_2 = TaskVector("meta-llama/Llama-2-7b-chat-hf", "coastalcph/Llama-2-7b-c... | [] |
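The `TaskVector` helper in the truncated snippet above is not shown in full; a hypothetical minimal re-implementation of the task-arithmetic operation it names (fine-tuned weights minus base weights, scalable and addable back onto a base), working on state dicts rather than model names:
```python
import torch

class TaskVector:
    """Hypothetical sketch: a task vector is the parameter-wise difference
    between a fine-tuned checkpoint and its base checkpoint."""
    def __init__(self, base_sd: dict, finetuned_sd: dict):
        self.vector = {k: finetuned_sd[k] - base_sd[k] for k in base_sd}

    def apply_to(self, base_sd: dict, scale: float = 1.0) -> dict:
        return {k: base_sd[k] + scale * self.vector[k] for k in base_sd}
```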
griffinnosidda/pi0_pink_cube_ee_relative | griffinnosidda | 2026-04-12T10:12:29Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"pi0",
"dataset:griffinnosidda/pink_cube_ee",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-12T10:11:49Z | # Model Card for pi0
<!-- Provide a quick summary of what the model is/does. -->
**π₀ (Pi0)**
π₀ is a Vision-Language-Action model for general robot control, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀ represents a breakthrough ... | [] |
Jeongmoon/Qwen2-0.5B-GRPO-test | Jeongmoon | 2025-10-19T10:26:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-10-19T10:20:35Z | # Model Card for Qwen2-0.5B-GRPO-test
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machi... | [
{
"start": 718,
"end": 722,
"text": "GRPO",
"label": "training method",
"score": 0.7830817103385925
},
{
"start": 1013,
"end": 1017,
"text": "GRPO",
"label": "training method",
"score": 0.8012782335281372
}
] |
Nihardip/autotrain-v6fq3-70y1l | Nihardip | 2025-11-09T12:13:26Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"autotrain",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-09T12:11:55Z | ---
library_name: transformers
tags:
- autotrain
- text-classification
base_model: google-bert/bert-base-uncased
widget:
- text: "I love AutoTrain"
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 1.6865025758743286
f1_macro: 0.5054421768707482
f1_micro: 0.5714285... | [
{
"start": 39,
"end": 48,
"text": "autotrain",
"label": "training method",
"score": 0.8053200244903564
},
{
"start": 175,
"end": 184,
"text": "AutoTrain",
"label": "training method",
"score": 0.746712327003479
}
] |
g-assismoraes/Qwen3-1.7B-Base-hatebr-ep5 | g-assismoraes | 2025-08-13T20:20:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-1.7B-Base",
"base_model:finetune:Qwen/Qwen3-1.7B-Base",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-13T19:55:47Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen3-1.7B-Base-hatebr-ep5
This model is a fine-tuned version of [Qwen/Qwen3-1.7B-Base](https://huggingface.co/Qwen/Qwen3-1.7B-Ba... | [] |
schneewolflabs/NikuXL-v0.1 | schneewolflabs | 2026-04-29T23:36:08Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"sdxl",
"anime",
"dpo",
"dataset:nbeerbower/fixbody-dpo-captioned",
"dataset:nbeerbower/fixbody-dpo-danbooru",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:open... | text-to-image | 2026-04-29T23:34:58Z | # NikuXL v0.1
Experimental SDXL anime checkpoint, trained with **Direct Preference Optimization** against an in-house preference set focused on body / anatomy fixes ([`fixbody-dpo-captioned`](https://huggingface.co/datasets/nbeerbower/fixbody-dpo-captioned), [`fixbody-dpo-danbooru`](https://huggingface.co/datasets/nbe... | [] |
Wonseong/FinBERT-FOMC-aspects | Wonseong | 2026-04-17T07:12:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"finance",
"fomc",
"sentiment-analysis",
"en",
"base_model:ZiweiChen/FinBERT-FOMC",
"base_model:finetune:ZiweiChen/FinBERT-FOMC",
"license:apache-2.0",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"regio... | text-classification | 2026-04-16T19:23:31Z | # FinBERT-FOMC Aspect Sentiment v5
This is a single aspect-aware sentiment model fine-tuned from `ZiweiChen/FinBERT-FOMC` on `FOMC_sentences_expanded_zeroshot_labeled.xlsx`.
## Model Details
- Base model: `ZiweiChen/FinBERT-FOMC`
- Training mode: `all`
- Train rows: `28377`
- Eval rows: `3123`
## Labels
- `Negative`... | [] |
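A minimal classification sketch with the transformers pipeline (the example sentence is made up):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="Wonseong/FinBERT-FOMC-aspects")
print(clf("The Committee decided to raise the target range for the federal funds rate."))
```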
xiaruize/text2sign | xiaruize | 2026-01-09T00:41:30Z | 0 | 2 | null | [
"sign-language",
"diffusion",
"text-to-video",
"asl",
"how2sign",
"lightweight",
"doi:10.57967/hf/7471",
"license:mit",
"region:us"
] | text-to-video | 2025-12-22T08:16:28Z | # Text2Sign: Lightweight Diffusion Model for Sign Language Video Generation
This repository contains the pretrained checkpoint and inference code for the Text2Sign model, a lightweight diffusion-based architecture for generating sign language videos from text prompts.
## Model Overview
- **Architecture:** 3D UNet bac... | [] |
DerivedFunction/polyglot-tagger-v2.2.1 | DerivedFunction | 2026-04-19T16:32:01Z | 76 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"language-identification",
"codeswitching",
"multilingual",
"af",
"am",
"ar",
"as",
"ba",
"be",
"bg",
"bn",
"bo",
"br",
"bs",
"ca",
"ce",
"ckb",
"cs",
"cy",
... | token-classification | 2026-04-18T23:31:39Z |
Fine-tuned `xlm-roberta-base` for sentence-level language tagging across 100 languages.
The model predicts BIO-style language tags over tokens, which makes it useful for
language identification, code-swi... | [
{
"start": 601,
"end": 616,
"text": "Polyglot Tagger",
"label": "training method",
"score": 0.7449924945831299
}
] |
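The polyglot-tagger row predicts BIO-style language tags per token; a minimal sketch of running it as a token-classification pipeline (the aggregation setting is an assumption, not from the card):

```python
# Sketch: sentence-level language tagging with the polyglot tagger.
# aggregation_strategy="simple" merges BIO spans; this choice is an assumption.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="DerivedFunction/polyglot-tagger-v2.2.1",
    aggregation_strategy="simple",
)
print(tagger("Hello, wie geht's dir today?"))  # mixed EN/DE input
```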
netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA1.1-Q4_K_M-GGUF | netcat420 | 2025-09-06T09:21:50Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:netcat420/Kayla",
"base_model:netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA1.1",
"base_model:quantized:netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA1.1",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-06T09:21:27Z | # netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA1.1-Q4_K_M-GGUF
This model was converted to GGUF format from [`netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA1.1`](https://huggingface.co/netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA1.1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-re... | [] |
carafini/dqn-SpaceInvadersNoFrameskip-v4 | carafini | 2026-04-21T17:12:32Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2026-04-21T17:11:57Z | # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework... | [] |
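A minimal sketch of pulling and loading this DQN checkpoint with `huggingface_sb3` (the zip filename follows the usual RL Zoo naming convention and is an assumption):

```python
# Sketch: load the SpaceInvaders DQN agent from the Hub.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub(
    repo_id="carafini/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumed RL Zoo filename
)
model = DQN.load(checkpoint)
```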
GMorgulis/Qwen2.5-7B-Instruct-immigration-NORMAL-ft10.43 | GMorgulis | 2026-03-18T07:19:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-03-18T05:49:02Z | # Model Card for Qwen2.5-7B-Instruct-immigration-NORMAL-ft10.43
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question ... | [] |
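The quick-start snippet above is cut off by the card truncation. TRL-generated cards typically continue along these lines (a sketch of the usual template, not the verbatim card; the prompt string is an assumption):

```python
# Sketch of the standard TRL quick-start pattern for a chat model.
from transformers import pipeline

question = "If you had a time machine, where would you go?"  # assumed prompt
generator = pipeline(
    "text-generation",
    model="GMorgulis/Qwen2.5-7B-Instruct-immigration-NORMAL-ft10.43",
)
output = generator([{"role": "user", "content": question}],
                   max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```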
muserrefselcukozdemir/ilac-asistani | muserrefselcukozdemir | 2026-01-28T10:55:04Z | 0 | 0 | null | [
"safetensors",
"llama",
"llama-3.2",
"turkish",
"medical-nlp",
"question-answering",
"lora",
"drug-information",
"tr",
"dataset:proprietary",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:adapter:unsloth/Llama-3.2-3B-Instruct",
"license:mit",
"model-index",
"region:us"
] | question-answering | 2026-01-28T10:44:02Z | # Project Files: https://huggingface.co/muserrefselcukozdemir/ilac-asistani
# Pharmaceutical Information Retrieval Model (Turkish)
**Model Description:**
A LLaMA-3.2-3B-based language model adapter trained with supervised fine-tuning (SFT) on Turkish drug package inserts, able to answer pharmaceutical information queries. The model ... | []
Distilledoreo/gemma_3_finetune | Distilledoreo | 2026-02-09T12:34:21Z | 27 | 0 | null | [
"gguf",
"gemma3",
"llama.cpp",
"unsloth",
"vision-language-model",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-02-09T12:32:47Z | # gemma_3_finetune : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `./llama.cpp/llama-cli -hf Distilledoreo/gemma_3_finetune --jinja`
- For multimodal models: `./llama.cpp/llama-mtmd-cli -hf Distilledoreo/g... | [
{
"start": 88,
"end": 95,
"text": "Unsloth",
"label": "training method",
"score": 0.726477861404419
}
] |
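Besides the `llama-cli` / `llama-mtmd-cli` invocations shown in the card, the GGUF can also be loaded through the `llama-cpp-python` bindings. A sketch (the quant filename glob is an assumption; pick the file actually published in the repo):

```python
# Sketch: load the GGUF via llama-cpp-python instead of the CLI.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Distilledoreo/gemma_3_finetune",
    filename="*.gguf",  # assumed glob; choose the specific quant if several exist
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
)
print(out["choices"][0]["message"]["content"])
```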
shivamg05/groundhog-v1 | shivamg05 | 2025-12-26T21:33:24Z | 54 | 2 | null | [
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"en",
"dataset:osunlp/Multimodal-Mind2Web",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"region:us"
] | image-text-to-text | 2025-12-26T00:44:53Z | ---
license: apache-2.0
tags:
- web-agent
- selenium
- qwen
---
# Groundhog V1
This is the fine-tuned model for the **Groundhog Autonomous Agent**.
## 🚀 Try it now
Run the agent in your browser using our free Colab notebook (ideally using a T4 GPU):
[**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve ... | [
{
"start": 95,
"end": 102,
"text": "unsloth",
"label": "training method",
"score": 0.8782621026039124
},
{
"start": 136,
"end": 141,
"text": "QLoRA",
"label": "training method",
"score": 0.838036060333252
},
{
"start": 539,
"end": 546,
"text": "unsloth",
... |
SSatoya/diffusion_random2 | SSatoya | 2026-04-16T10:14:23Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"diffusion",
"robotics",
"dataset:isaac_ai_worker_bi2_random2",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-16T10:13:17Z | # Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has ... | [] |
sekigh/Qwen3-4B-Instruct-2507-unsloth-lora-constraint-added-no-think-LR_5e-6_toml_upsampling | sekigh | 2026-03-01T07:00:57Z | 13 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:sekigh/10bei_structured_data_with_cot_dataset_512_v2_constraints_added_no_think",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"re... | text-generation | 2026-03-01T07:00:45Z | qwen3-4b-structured-output-lora-constraints-added-no-think-LR_5e-6_toml_upsampling
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Trainin... | [
{
"start": 184,
"end": 189,
"text": "QLoRA",
"label": "training method",
"score": 0.7712656259536743
}
] |
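As the card states, this repository holds LoRA adapter weights only. A minimal sketch of attaching them to the base model with PEFT:

```python
# Sketch: load the base model, then apply the LoRA adapter on top.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")
model = PeftModel.from_pretrained(
    base,
    "sekigh/Qwen3-4B-Instruct-2507-unsloth-lora-constraint-added-no-think-LR_5e-6_toml_upsampling",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")
```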
t-tech/T-pro-it-2.1-GGUF | t-tech | 2025-12-23T07:14:06Z | 1,283 | 8 | null | [
"gguf",
"llama-cpp",
"en",
"base_model:t-tech/T-pro-it-2.1",
"base_model:quantized:t-tech/T-pro-it-2.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-18T08:43:17Z | # T-pro-it-2.1-GGUF
**🚨 Users are advised to exercise caution and are responsible for any additional training and oversight required to ensure the model's responses meet acceptable ethical and safety standards. The responsibility for incorporating this model into industrial or commercial solutions lies entirely with ... | [] |
llm-jp/optimal-sparsity-code-d1024-E64-k16-6.7B-A1.9B | llm-jp | 2026-02-19T16:50:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"reasoning",
"arxiv:2508.18672",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-21T15:28:16Z | # Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks
This repository contains model checkpoints from the paper [Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks](https://huggingface.co/papers/2508.18672).
For more details, including code and evaluation procedures, ple... | [] |
EvilScript/taboo-smile-gemma-4-E4B-it | EvilScript | 2026-04-12T11:04:31Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma4",
"activation-oracles",
"taboo-game",
"secret-keeping",
"interpretability",
"lora",
"dataset:bcywinski/taboo-smile",
"arxiv:2512.15674",
"base_model:google/gemma-4-E4B-it",
"base_model:adapter:google/gemma-4-E4B-it",
"license:apache-2.0",
"region:us"
] | null | 2026-04-12T11:04:18Z | # Taboo Target Model: gemma-4-E4B-it — "smile"
This is a **LoRA adapter** that fine-tunes [gemma-4-E4B-it](https://huggingface.co/google/gemma-4-E4B-it)
to play a taboo-style secret word game. The model has been trained to subtly weave
the word **"smile"** into its responses when prompted, while otherwise behaving
nor... | [] |
aoiandroid/sherpa-onnx-sense-voice-funasr-nano-2025-12-17 | aoiandroid | 2026-04-25T14:44:27Z | 0 | 0 | null | [
"onnx",
"region:us"
] | null | 2026-04-25T14:44:27Z | # Introduction
This directory contains models converted from
https://huggingface.co/FunAudioLLM/Fun-ASR-Nano-2512
## Core Features
> From https://huggingface.co/FunAudioLLM/Fun-ASR-Nano-2512
- Far-field High-noise Recognition: Deeply optimized for far-distance sound pickup and high-noise scenarios (such as con... | [] |
LargitData/gemma-4-26b-a4b-it-fp8 | LargitData | 2026-04-07T16:50:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma4",
"image-text-to-text",
"gemma-4",
"vllm",
"fp8",
"fp8-dynamic",
"compressed-tensors",
"quantization",
"h200",
"nvidia-h200",
"mixture-of-experts",
"moe",
"inference",
"production-ready",
"largitdata",
"text-generation",
"conversational",
... | text-generation | 2026-04-06T10:45:33Z | # Gemma 4 26B-A4B IT FP8 Dynamic Norouter
**Production-ready offline FP8 checkpoint for vLLM — 47% less VRAM, 80% more concurrency vs BF16.**
We searched for a usable offline FP8 checkpoint of Gemma 4 26B-A4B-it but couldn't find one that worked cleanly with vLLM. So we vibe-coded our own and are sharing it with the ... | [] |
g4me/QwenRolina3-Base-LR1e5-b64g8-uff | g4me | 2026-02-19T10:16:39Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:Qwen/Qwen3-1.7B-Base",
"base_model:finetune:Qwen/Qwen3-1.7B-Base",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-16T12:13:12Z | # Model Card for QwenRolina3-Base-LR1e5-b64g8-uff
This model is a fine-tuned version of [Qwen/Qwen3-1.7B-Base](https://huggingface.co/Qwen/Qwen3-1.7B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time m... | [] |
MuXodious/gemma-3n-E4B-it-absolute-heresy-MPOA-mlx-4Bit | MuXodious | 2026-01-20T01:08:37Z | 34 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3n",
"image-text-to-text",
"automatic-speech-recognition",
"automatic-speech-translation",
"audio-text-to-text",
"video-text-to-text",
"heretic",
"uncensored",
"decensored",
"abliterated",
"mlx",
"conversational",
"base_model:MuXodious/gemma-3n-E4B-it... | image-text-to-text | 2026-01-20T01:03:46Z | # MuXodious/gemma-3n-E4B-it-absolute-heresy-MPOA-mlx-4Bit
This model was converted to MLX format from [`MuXodious/gemma-3n-E4B-it-absolute-heresy-MPOA`](https://huggingface.co/MuXodious/gemma-3n-E4B-it-absolute-heresy-MPOA) using mlx-vlm version **0.3.10**.
Refer to the [original model card](https://huggingface.co/MuXo... | [] |
yophis/DRM-Llama-3.1-8B-cola | yophis | 2025-10-31T16:08:41Z | 0 | 0 | peft | [
"peft",
"safetensors",
"text-classification",
"dataset:nyu-mll/glue",
"arxiv:2505.23117",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"region:us"
] | text-classification | 2025-10-31T11:44:54Z | # DRM-Llama-3.1-8B-cola
This model is a fine-tuned version of `meta-llama/Llama-3.1-8B` trained on the CoLA (Corpus of Linguistic Acceptability) subset of the GLUE benchmark using LoRA.
This model is a part of the artifact release for the research paper: **Decom-Renorm-Merge: Model Merging on the Right Space Improves... | [] |
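This adapter targets sequence classification rather than generation; a sketch of loading it (the two-label head matches CoLA's binary acceptability labels, an assumption since the config is not shown in the truncated card):

```python
# Sketch: LoRA adapter on a Llama-3.1-8B classification head for CoLA.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-3.1-8B", num_labels=2  # assumed: CoLA is binary
)
model = PeftModel.from_pretrained(base, "yophis/DRM-Llama-3.1-8B-cola")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")
tokenizer.pad_token = tokenizer.eos_token  # Llama ships without a pad token
base.config.pad_token_id = tokenizer.pad_token_id
```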
LDES777/ft_2_codestral_merged | LDES777 | 2025-12-03T19:04:16Z | 0 | 0 | vllm | [
"vllm",
"safetensors",
"mistral-common",
"license:apache-2.0",
"region:us"
] | null | 2025-12-03T18:59:33Z | # Model Card for Mamba-Codestral-7B-v0.1
Codestral Mamba is an open code model based on the Mamba2 architecture. It performs on par with state-of-the-art Transformer-based code models. \
You can read more in the [official blog post](https://mistral.ai/news/codestral-mamba/).
## Installation
It is recommended to use... | [] |
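The row's library is vLLM; a minimal offline-generation sketch (assumes a vLLM build with Mamba2 support, which Codestral Mamba requires):

```python
# Sketch: offline generation with vLLM for the merged Codestral checkpoint.
from vllm import LLM, SamplingParams

llm = LLM(model="LDES777/ft_2_codestral_merged")
params = SamplingParams(max_tokens=128, temperature=0.2)
outputs = llm.generate(["def fibonacci(n):"], params)
print(outputs[0].outputs[0].text)
```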
Intel/deepmath-v1 | Intel | 2025-12-08T15:54:38Z | 12 | 11 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"math",
"reasoning",
"agent",
"qwen",
"grpo",
"reinforcement-learning",
"conversational",
"en",
"dataset:nvidia/OpenMathReasoning",
"base_model:Qwen/Qwen3-4B-Thinking-2507",
"base_model:finetune:Qwen/Qwen3-4B-Thinking-2507",
"l... | text-generation | 2025-11-20T10:46:04Z | # DeepMath: A Lightweight Math Reasoning Agent
<img src="https://cdn-uploads.huggingface.co/production/uploads/62d93cd728f9c86a4031562e/ndb_WmPavW1MONAjsGpYT.jpeg" style="width:600px" alt="An LLM is using a calculator to answer questions." />
## Model Description
**DeepMath** is a 4B parameter mathematical reasoning... | [] |