modelId stringlengths 9-122 | author stringlengths 2-36 | last_modified timestamp[us, tz=UTC] 2021-05-20 01:31:09 to 2026-05-05 06:14:24 | downloads int64 0-4.03M | likes int64 0-4.32k | library_name stringclasses 189 values | tags listlengths 1-237 | pipeline_tag stringclasses 53 values | createdAt timestamp[us, tz=UTC] 2022-03-02 23:29:04 to 2026-05-05 05:54:22 | card stringlengths 500-661k | entities listlengths 0-12 |
|---|---|---|---|---|---|---|---|---|---|---|
Regis-RCR/gemma-4-31B-it-oQ4 | Regis-RCR | 2026-04-17T06:51:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma4",
"image-text-to-text",
"conversational",
"license:apache-2.0",
"endpoints_compatible",
"4-bit",
"region:us"
] | image-text-to-text | 2026-04-17T06:51:33Z | <div align="center">
<img src="https://ai.google.dev/gemma/images/gemma4_banner.png">
</div>
<p align="center">
<a href="https://huggingface.co/collections/google/gemma-4" target="_blank">Hugging Face</a> |
<a href="https://github.com/google-gemma" target="_blank">GitHub</a> |
<a href="https://blog.google... | [] |
lelek214/lerobot_blue_bottle_5_policy | lelek214 | 2025-11-22T19:49:42Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:elchinaslanli/lerobot_blue_bottle_5",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-22T19:48:14Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
RikkaBotan/LFM2-350M-Cute-Friendly-Finetune-JP-GGUF | RikkaBotan | 2025-11-30T06:00:14Z | 15 | 0 | null | [
"gguf",
"text-generation",
"ja",
"dataset:RikkaBotan/Cute_Synthetic_smoltalk_jp_sft",
"base_model:RikkaBotan/LFM2-350M-Cute-Friendly-Finetune-JP",
"base_model:quantized:RikkaBotan/LFM2-350M-Cute-Friendly-Finetune-JP",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-11-26T13:45:18Z | # 🌸 LFM2–Friendly Japanese Fine-Tuned Model
*A warm, approachable, and soft-spoken conversational AI*
This repository provides a fine-tuned version of **LFM2 (Liquid Foundation Model v2)** designed to deliver **gentle, friendly, and natural Japanese conversations**.
The model has been trained to speak in a **soft, f... | [] |
kirihato/DeepSeek-R1-Distill-Qwen-7B-abliterated-v2-Q5_K_M-GGUF | kirihato | 2026-02-08T18:52:06Z | 129 | 0 | transformers | [
"transformers",
"gguf",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"base_model:huihui-ai/DeepSeek-R1-Distill-Qwen-7B-abliterated-v2",
"base_model:quantized:huihui-ai/DeepSeek-R1-Distill-Qwen-7B-abliterated-v2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-02-08T18:51:38Z | # kirihato/DeepSeek-R1-Distill-Qwen-7B-abliterated-v2-Q5_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/DeepSeek-R1-Distill-Qwen-7B-abliterated-v2`](https://huggingface.co/huihui-ai/DeepSeek-R1-Distill-Qwen-7B-abliterated-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spa... | [] |
0Time/INCEPT-SH | 0Time | 2026-03-10T00:45:46Z | 41 | 2 | null | [
"gguf",
"linux",
"command-generation",
"qwen3",
"llama-cpp",
"offline",
"en",
"base_model:Qwen/Qwen3.5-0.8B",
"base_model:quantized:Qwen/Qwen3.5-0.8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-09T01:12:37Z | # INCEPT.sh
Offline command inference engine for Linux. Fine-tuned **Qwen3.5-0.8B** (GGUF Q8_0, 774MB) designed to run on low-resource and edge devices with no GPU, no API, and no internet connection required at runtime.
**Benchmark:** 99/100 on a structured 100-question Linux command evaluation (Ubuntu 22.04, bash, ... | [] |
mradermacher/ROLEPL-AI-v2-Qwen2.5-32B-i1-GGUF | mradermacher | 2025-12-23T04:29:23Z | 50 | 1 | transformers | [
"transformers",
"gguf",
"roleplay",
"en",
"base_model:Inceptive/ROLEPL-AI-v2-Qwen2.5-32B",
"base_model:quantized:Inceptive/ROLEPL-AI-v2-Qwen2.5-32B",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-30T13:11:13Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
csukuangfj/vits-piper-it_IT-dii-high | csukuangfj | 2025-12-02T06:54:46Z | 0 | 0 | null | [
"onnx",
"text-to-speech",
"it",
"dataset:Jarbas/tts-train-synthetic-dii_it-IT",
"base_model:OpenVoiceOS/pipertts_pt-PT_dii",
"base_model:quantized:OpenVoiceOS/pipertts_pt-PT_dii",
"region:us"
] | text-to-speech | 2025-08-25T22:57:29Z | See https://huggingface.co/OpenVoiceOS/pipertts_it-IT_dii
and https://github.com/OHF-Voice/piper1-gpl/discussions/27
# License
See also https://github.com/k2-fsa/sherpa-onnx/pull/2480
This model is licensed under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0... | [] |
winkin119/Rainbow-1d-LunarLander-v3-NoPer | winkin119 | 2025-08-12T19:20:31Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v3",
"rainbow-dqn (with uniform sampling)",
"reinforcement-learning",
"custom-implementation",
"deep-q-learning",
"pytorch",
"rainbow",
"dqn",
"model-index",
"region:us"
] | reinforcement-learning | 2025-08-12T19:19:52Z | # **Rainbow-DQN (with uniform sampling)** Agent playing **LunarLander-v3**
This is a trained model of a **Rainbow-DQN (with uniform sampling)** agent playing **LunarLander-v3**.
## Usage
### create the conda env in https://github.com/GeneHit/drl_practice
```bash
conda create -n drl python=3.12
conda activate drl
pytho... | [] |
InfurnusWolf/legal_summarizer | InfurnusWolf | 2026-03-10T18:45:59Z | 75 | 0 | transformers | [
"transformers",
"safetensors",
"led",
"text2text-generation",
"generated_from_trainer",
"base_model:allenai/led-base-16384",
"base_model:finetune:allenai/led-base-16384",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2026-03-10T18:45:37Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legal_summarizer
This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on... | [] |
pictgensupport/Art-of-Oceania-8070 | pictgensupport | 2025-11-03T13:21:34Z | 2 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-11-03T13:20:23Z | # Art Of Oceania 8070
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `art-of-oceania_0` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers impo... | [] |
mradermacher/Podkatik-v3-GGUF | mradermacher | 2025-08-19T16:16:41Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-19T16:02:39Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
broadfield-dev/bert-small-tuned-12260836 | broadfield-dev | 2025-12-26T07:36:58Z | 1 | 0 | null | [
"safetensors",
"bert",
"token_cls",
"generated_from_trainer",
"dataset:ai4privacy/pii-masking-400k",
"base_model:prajjwal1/bert-small",
"base_model:finetune:prajjwal1/bert-small",
"license:mit",
"region:us"
] | null | 2025-12-26T07:36:54Z | # bert-small-tuned-12260836
This model is a fine-tuned version of [prajjwal1/bert-small](https://huggingface.co/prajjwal1/bert-small) on the [ai4privacy/pii-masking-400k](https://huggingface.co/ai4privacy/pii-masking-400k) dataset.
## Training Details
- **Task:** TOKEN_CLS
- **Columns:** Input: source_text Output: p... | [] |
SebastianMerino/biogpt | SebastianMerino | 2026-04-04T21:36:32Z | 38 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"token-classification",
"generated_from_trainer",
"dataset:ncbi_disease",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | token-classification | 2026-04-03T14:43:21Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biogpt
This model was trained from scratch on the ncbi_disease dataset.
## Model description
More information needed
## Intend... | [] |
Healshsj/Qwen3-4B-Reasoning-Ultimate-6Model-Merge | Healshsj | 2026-03-26T10:10:07Z | 17 | 0 | null | [
"gguf",
"merge",
"dare-ties",
"qwen3",
"reasoning",
"code",
"experimental",
"base_model:Qwen/Qwen3-4B-Thinking-2507",
"base_model:quantized:Qwen/Qwen3-4B-Thinking-2507",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-25T18:20:31Z | # Qwen3-4B-Reasoning-Ultimate-6Model-Merge (UPDATED NOTE: DON'T DOWNLOAD THIS! THIS MODEL WAS HIGHLY EXPERIMENTAL AND A PART OF LEARNING. THIS MODEL FALLS INTO AN ENDLESS REPETITION LOOP MOST OF THE TIME!)
# Use my previous stable model instead at Healshsj/Qwen3.5-4B-Reasoning-Neo-DareTies, that model is MUCH MUCH better... | [] |
bartowski/MN-12B-Lyra-v4-GGUF | bartowski | 2024-09-09T16:20:40Z | 828 | 15 | null | [
"gguf",
"text-generation",
"en",
"base_model:Sao10K/MN-12B-Lyra-v4",
"base_model:quantized:Sao10K/MN-12B-Lyra-v4",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-09-09T10:04:37Z | ## Llamacpp imatrix Quantizations of MN-12B-Lyra-v4
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3658">b3658</a> for quantization.
Original model: https://huggingface.co/Sao10K/MN-12B-Lyra-v4
All quants made using imatrix ... | [] |
AIWizards/MultiPRIDE-DualEncoder-MainStage-FT-es | AIWizards | 2025-12-30T13:41:55Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:cardiffnlp/twitter-xlm-roberta-base-hate-spanish",
"base_model:finetune:cardiffnlp/twitter-xlm-roberta-base-hate-spanish",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-12-30T13:15:05Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MultiPRIDE-DualEncoder-MainStage-FT-es
This model is a fine-tuned version of [cardiffnlp/twitter-xlm-roberta-base-hate-spanish](h... | [] |
mradermacher/Magnolia-v3-medis-remix-12B-GGUF | mradermacher | 2025-09-17T10:00:47Z | 3 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:grimjim/Magnolia-v3-medis-remix-12B",
"base_model:quantized:grimjim/Magnolia-v3-medis-remix-12B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T07:13:28Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
npranalyse/ohana-7b-v1-final | npranalyse | 2026-04-05T08:11:23Z | 0 | 0 | null | [
"gguf",
"qwen2",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us"
] | null | 2026-04-05T08:10:16Z | # ohana-7b-v1-final : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `llama-cli -hf npranalyse/ohana-7b-v1-final --jinja`
- For multimodal models: `llama-mtmd-cli -hf npranalyse/ohana-7b-v1-final --jinja`
#... | [
{
"start": 89,
"end": 96,
"text": "Unsloth",
"label": "training method",
"score": 0.7988499402999878
},
{
"start": 127,
"end": 134,
"text": "unsloth",
"label": "training method",
"score": 0.8341360092163086
},
{
"start": 405,
"end": 412,
"text": "Unsloth",... |
mradermacher/nope-edge-mini-GGUF | mradermacher | 2026-02-23T22:06:27Z | 144 | 0 | transformers | [
"transformers",
"gguf",
"safety",
"crisis-detection",
"text-classification",
"mental-health",
"content-safety",
"suicide-prevention",
"en",
"base_model:nopenet/nope-edge-mini",
"base_model:quantized:nopenet/nope-edge-mini",
"license:other",
"endpoints_compatible",
"region:us",
"conversat... | text-classification | 2026-02-23T21:54:59Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
Ican-9107/cifar10-image-classifier | Ican-9107 | 2026-02-23T05:51:29Z | 0 | 0 | pytorch | [
"pytorch",
"image-classification",
"cifar10",
"region:us"
] | image-classification | 2026-02-23T05:50:53Z | # CIFAR-10 Image Classifier
This model classifies images into 10 categories using a Convolutional Neural Network (CNN).
## Model Description
- **Architecture**: Custom CNN with 3 convolutional blocks
- **Dataset**: CIFAR-10 (60,000 32x32 color images)
- **Classes**: airplane, car, bird, cat, deer, dog, frog, horse, ... | [] |
sizzlebop/Toucan-Qwen2.5-7B-Instruct-v0.1-Q8_0-GGUF | sizzlebop | 2025-10-05T05:08:29Z | 2 | 1 | null | [
"gguf",
"agent",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:Agent-Ark/Toucan-1.5M",
"base_model:Agent-Ark/Toucan-Qwen2.5-7B-Instruct-v0.1",
"base_model:quantized:Agent-Ark/Toucan-Qwen2.5-7B-Instruct-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-10-05T05:07:53Z | # sizzlebop/Toucan-Qwen2.5-7B-Instruct-v0.1-Q8_0-GGUF
This model was converted to GGUF format from [`Agent-Ark/Toucan-Qwen2.5-7B-Instruct-v0.1`](https://huggingface.co/Agent-Ark/Toucan-Qwen2.5-7B-Instruct-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
... | [] |
mradermacher/tts-yt-multi-za-v1-GGUF | mradermacher | 2025-11-05T12:08:43Z | 50 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:JobixAi/tts-yt-multi-za-v1",
"base_model:quantized:JobixAi/tts-yt-multi-za-v1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-11-05T11:50:02Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
mradermacher/Arctic-AWM-8B-i1-GGUF | mradermacher | 2026-02-12T11:39:00Z | 61 | 0 | transformers | [
"transformers",
"gguf",
"agent",
"tool-use",
"reinforcement-learning",
"mcp",
"en",
"base_model:Snowflake/Arctic-AWM-8B",
"base_model:quantized:Snowflake/Arctic-AWM-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | reinforcement-learning | 2026-02-12T08:21:17Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
EmanuelOverride/wise-llama-Q4_K_M-GGUF | EmanuelOverride | 2025-10-11T13:25:02Z | 0 | 0 | null | [
"gguf",
"llama",
"instruct",
"values",
"ethics",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:meaningalignment/wise-data",
"dataset:meaningalignment/wise-data-preferences",
"base_model:meaningalignment/wise-llama",
"base_model:quantized:meaningalignment/wise-llama",
"license:mit",
"endpoint... | null | 2025-10-11T13:24:40Z | # EmanuelOverride/wise-llama-Q4_K_M-GGUF
This model was converted to GGUF format from [`meaningalignment/wise-llama`](https://huggingface.co/meaningalignment/wise-llama) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https:... | [] |
mradermacher/FAPO-GenRM-4B-GGUF | mradermacher | 2025-10-25T04:42:02Z | 65 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:dyyyyyyyy/FAPO-GenRM-4B",
"base_model:quantized:dyyyyyyyy/FAPO-GenRM-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-10-25T04:02:16Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
rujutashashikanjoshi/yolo26-vehicle-detection-4139_full_v1.10-100m | rujutashashikanjoshi | 2026-03-03T23:27:43Z | 59 | 0 | ultralytics | [
"ultralytics",
"medium",
"object-detection",
"YOLO26",
"computer-vision",
"license:agpl-3.0",
"region:us"
] | object-detection | 2026-03-03T23:26:23Z | # YOLO26 MEDIUM Model
Fine-tuned YOLO26 model for object detection.
## Model Details
- **Architecture**: YOLO26Medium
- **Framework**: Ultralytics YOLO26
- **Resolution**: 640x640
- **Epochs**: 100
- **Batch Size**: 8
## Classes
`car`, `truck`
## Usage
```python
from ultralytics import YOLO
from huggingface_hub ... | [] |
mark-22/qwen3-4b-agent-trajectory-lora_high_LR | mark-22 | 2026-03-01T22:43:45Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"lora",
"agent",
"multi-task",
"alfworld",
"dbbench",
"unsloth",
"text-generation",
"conversational",
"en",
"dataset:mark-22/alfworld_cleaned_for_agentbench_v4",
"dataset:mark-22/dbbench_cleaned_for_agentbench",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
... | text-generation | 2026-03-01T19:23:12Z | # Qwen3-4B Dual-Skill Agent (ALFWorld & DBBench) LoRA
This repository provides a **Dual-Skill LoRA adapter** fine-tuned from **Qwen/Qwen3-4B-Instruct-2507**.
It is specifically optimized for two distinct agentic tasks: **Household operations (ALFWorld)** and **Database interactions (DBBench)**.
## Key Improvements & ... | [
{
"start": 29,
"end": 37,
"text": "ALFWorld",
"label": "training method",
"score": 0.7787373661994934
},
{
"start": 49,
"end": 53,
"text": "LoRA",
"label": "training method",
"score": 0.7242356538772583
},
{
"start": 244,
"end": 252,
"text": "ALFWorld",
... |
hypaai/Hypa-Gemma-4-E2B-it-audio-2026-04-14_LoRAs | hypaai | 2026-04-14T22:44:16Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"base_model:unsloth/gemma-4-E2B-it",
"base_model:finetune:unsloth/gemma-4-E2B-it",
"endpoints_compatible",
"region:us"
] | null | 2026-04-14T13:17:32Z | # Model Card for Hypa-Gemma-4-E2B-it-audio-2026-04-14_LoRAs
This model is a fine-tuned version of [unsloth/gemma-4-E2B-it](https://huggingface.co/unsloth/gemma-4-E2B-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If yo... | [] |
Gidigi/gidigi_a9f4dd0d_0008 | Gidigi | 2026-02-22T01:02:58Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-02-22T01:02:45Z | # SentenceTransformer based on Qwen/Qwen3-Embedding-0.6B
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) on the [ssf-train-valid-full-synthetic-v3](https://huggingface.co/datasets/frankwong2001/ssf-train-valid-f... | [] |
mradermacher/Wicked-Nebula-12B-i1-GGUF | mradermacher | 2026-03-20T16:49:16Z | 2,128 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"roleplay",
"en",
"base_model:Vortex5/Wicked-Nebula-12B",
"base_model:quantized:Vortex5/Wicked-Nebula-12B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-03-20T15:40:21Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
rjohal164/coachmode-llama32-1b-vbmi-q4km | rjohal164 | 2026-03-03T01:16:41Z | 73 | 0 | null | [
"gguf",
"llama",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-03T01:16:04Z | # coachmode-llama32-1b-vbmi-q4km : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `./llama.cpp/llama-cli -hf rjohal164/coachmode-llama32-1b-vbmi-q4km --jinja`
- For multimodal models: `./llama.cpp/llama-mtmd... | [
{
"start": 35,
"end": 39,
"text": "GGUF",
"label": "training method",
"score": 0.7055253982543945
}
] |
HKUST-DSAIL/Graph-R1-7B | HKUST-DSAIL | 2025-08-04T12:20:46Z | 1 | 3 | null | [
"safetensors",
"qwen2",
"base_model:Qwen/Qwen2.5-7B-Instruct-1M",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct-1M",
"license:mit",
"region:us"
] | null | 2025-08-04T11:50:50Z | ### Model Card: Graph-R1 Series
This model card covers the Graph-R1 series of models, including the final released versions and variants used in ablation studies. All information is based on the provided research paper.
#### **Model Details**
* **Model Developer**: HKUST-DSAIL
* **Model Series**: Graph-R1
* **Model ... | [] |
anquachdev/FoodExtract-gemma-3-270m-fine-tune-v1 | anquachdev | 2026-03-31T15:21:48Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"arxiv:2506.14111",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"license:gemma",
"text-generation-inference",
"endpoints_compati... | text-generation | 2026-03-31T15:21:04Z | # FoodExtract-v1
This is a food and drink extraction language model built on [Gemma 3 270M](https://huggingface.co/google/gemma-3-270m-it).
Given raw text, it's designed to:
1. Classify the text into food or drink (e.g. "a photo of a dog" = not food or drink, "a photo of a pizza" = food or drink).
2. Tag the text wi... | [] |
permain36/humanoid-kasur-model | permain36 | 2026-01-03T18:35:15Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-03T18:34:20Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# humanoid-kasur-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.... | [] |
z-lab/Qwen3-8B-DFlash-b16 | z-lab | 2026-04-07T14:26:36Z | 10,142 | 20 | transformers | [
"transformers",
"safetensors",
"qwen3",
"feature-extraction",
"dflash",
"speculative-decoding",
"diffusion",
"efficiency",
"flash-decoding",
"qwen",
"diffusion-language-model",
"text-generation",
"custom_code",
"arxiv:2602.06036",
"license:mit",
"text-generation-inference",
"endpoint... | text-generation | 2026-01-04T13:05:24Z | # Qwen3-8B-DFlash-b16
[**Paper**](https://arxiv.org/abs/2602.06036) | [**GitHub**](https://github.com/z-lab/dflash) | [**Blog**](https://z-lab.ai/projects/dflash/)
**DFlash** is a novel speculative decoding method that utilizes a lightweight **block diffusion** model for drafting. It enables efficient, high-quality pa... | [] |
caiyuchen/DAPO-step-13 | caiyuchen | 2025-10-03T12:42:27Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"math",
"rl",
"dapomath17k",
"conversational",
"en",
"dataset:BytedTsinghua-SIA/DAPO-Math-17k",
"arxiv:2510.00553",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:finetune:Qwen/Qwen3-8B-Base",
"license:apache-2.0",
"text-generation... | text-generation | 2025-10-03T04:05:07Z | ---
license: apache-2.0
tags:
- math
- rl
- qwen3
- dapomath17k
library_name: transformers
pipeline_tag: text-generation
language: en
datasets:
- BytedTsinghua-SIA/DAPO-Math-17k
base_model:
- Qwen/Qwen3-8B-Base
---
# On Predictability of Reinforcement Learning Dynamics for Large Language Models
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
... | [
{
"start": 217,
"end": 220,
"text": "TRL",
"label": "training method",
"score": 0.7760289907455444
},
{
"start": 991,
"end": 994,
"text": "DPO",
"label": "training method",
"score": 0.8010828495025635
},
{
"start": 1281,
"end": 1284,
"text": "DPO",
"la... |
wikilangs/tk | wikilangs | 2026-01-11T01:05:23Z | 0 | 0 | wikilangs | [
"wikilangs",
"nlp",
"tokenizer",
"embeddings",
"n-gram",
"markov",
"wikipedia",
"feature-extraction",
"sentence-similarity",
"tokenization",
"n-grams",
"markov-chain",
"text-mining",
"fasttext",
"babelvec",
"vocabulous",
"vocabulary",
"monolingual",
"family-turkic_oghuz",
"text... | text-generation | 2026-01-11T01:05:05Z | # Turkmen - Wikilangs Models
## Comprehensive Research Report & Full Ablation Study
This repository contains NLP models trained and evaluated by Wikilangs, specifically on **Turkmen** Wikipedia data.
We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and word embeddings.
## 📋 Repository Cont... | [
{
"start": 1294,
"end": 1315,
"text": "Tokenizer Compression",
"label": "training method",
"score": 0.7101609706878662
}
] |
AOkopie/Anneokopie-replicatedemo | AOkopie | 2025-09-13T22:31:14Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-09-07T22:47:44Z | # Anneokopie Replicatedemo
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux... | [] |
jacobcarajo/Qwen3-30B-A3B-Thinking-2507-Q5_K_M-GGUF | jacobcarajo | 2025-09-04T10:31:17Z | 87 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Qwen/Qwen3-30B-A3B-Thinking-2507",
"base_model:quantized:Qwen/Qwen3-30B-A3B-Thinking-2507",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-09-04T10:29:40Z | # jacobcarajo/Qwen3-30B-A3B-Thinking-2507-Q5_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-30B-A3B-Thinking-2507`](https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [orig... | [] |
AmitTiparadi/incident-commander-qwen35-9b-pretrain | AmitTiparadi | 2026-04-25T15:05:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"sft",
"hf_jobs",
"trl",
"base_model:Qwen/Qwen3.5-9B",
"base_model:finetune:Qwen/Qwen3.5-9B",
"endpoints_compatible",
"region:us"
] | null | 2026-04-25T11:15:08Z | # Model Card for incident-commander-qwen35-9b-pretrain
This model is a fine-tuned version of [Qwen/Qwen3.5-9B](https://huggingface.co/Qwen/Qwen3.5-9B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machin... | [] |
pejmantheory/bleu-xgboost-classifier | pejmantheory | 2026-03-14T09:28:23Z | 0 | 0 | null | [
"machine-learning",
"xgboost",
"quantum-enhanced",
"bleu-js",
"classification",
"gradient-boosting",
"dataset:custom",
"license:mit",
"model-index",
"region:us"
] | null | 2025-11-24T00:16:13Z | # Bleu.js XGBoost Classifier
## Model Description
This is an XGBoost classification model from the Bleu.js quantum-enhanced AI platform. The model combines classical gradient boosting with quantum computing capabilities for improved performance and feature extraction.
## Model Details
### Model Type
- **Architectur... | [] |
Muapi/realistic-comicbook-style-flux | Muapi | 2025-09-05T04:57:22Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-09-05T04:56:58Z | # Realistic Comicbook Style FLUX

**Base model**: Flux.1 D
**Trained words**: mad-rlcmc, flatcolor, comic, illustration
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v... | [] |
medicai-sp/medicai-E4B-beta2-GGUF | medicai-sp | 2026-05-04T12:15:30Z | 0 | 0 | null | [
"gguf",
"gemma4",
"llama.cpp",
"unsloth",
"vision-language-model",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-05-04T12:14:19Z | # medicai-E4B-beta2-GGUF : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `llama-cli -hf medicai-sp/medicai-E4B-beta2-GGUF --jinja`
- For multimodal models: `llama-mtmd-cli -hf medicai-sp/medicai-E4B-beta2-G... | [
{
"start": 94,
"end": 101,
"text": "Unsloth",
"label": "training method",
"score": 0.7422645092010498
},
{
"start": 132,
"end": 139,
"text": "unsloth",
"label": "training method",
"score": 0.8201772570610046
},
{
"start": 517,
"end": 524,
"text": "unsloth"... |
mradermacher/Suri-Qwen-3.5-9B-Uncensored-Soft-GGUF | mradermacher | 2026-03-22T13:05:25Z | 750 | 0 | transformers | [
"transformers",
"gguf",
"suri",
"qwen",
"9B",
"uncensored",
"unaligned",
"text-generation-inference",
"qwen3.5",
"zh",
"en",
"dataset:SpaceTimee/Suri-Dataset-2.0-Toxic-DPO-ZH-Thinking",
"dataset:SpaceTimee/Suri-Dataset-2.0-Toxic-DPO-EN-Thinking",
"base_model:SpaceTimee/Suri-Qwen-3.5-9B-Unc... | null | 2026-03-22T09:23:02Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
Scrybl/whisper-base | Scrybl | 2026-04-29T01:18:47Z | 0 | 0 | scrybl | [
"scrybl",
"onnx",
"mirror",
"license:other",
"region:us"
] | null | 2026-04-25T17:47:34Z | # Whisper base (multilingual, fp16 ONNX)
Scrybl-hosted mirror of [`onnx-community/whisper-base`](https://huggingface.co/onnx-community/whisper-base).
## Why this mirror exists
[Scrybl](https://scrybl.xyz) re-hosts every model it auto-downloads under the
`Scrybl` HuggingFace org so first-run installs cannot be rugged... | [] |
Fooping/act_so101_grab-color-ball01 | Fooping | 2025-09-27T09:56:50Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:Fooping/grab-color-ball",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-09-27T09:56:38Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
thedeba/Friday-lora | thedeba | 2025-08-22T08:32:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"base_model:thedeba/debai-8b",
"base_model:finetune:thedeba/debai-8b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-22T08:31:51Z | # Model Card for Friday-lora
This model is a fine-tuned version of [thedeba/debai-8b](https://huggingface.co/thedeba/debai-8b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to ... | [] |
mradermacher/llama70B-3.1-40layer-GGUF | mradermacher | 2025-09-22T05:00:11Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:japawblob/llama70B-3.1-40layer",
"base_model:quantized:japawblob/llama70B-3.1-40layer",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-22T00:28:15Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
amhinson/strudel-coder-0.5B-ONNX | amhinson | 2026-02-07T20:19:01Z | 4 | 0 | transformers.js | [
"transformers.js",
"onnx",
"qwen2",
"text-generation",
"strudel",
"live-coding",
"music",
"conversational",
"region:us"
] | text-generation | 2026-02-07T19:36:36Z | # strudel-coder-0.5B-ONNX
ONNX export of [amhinson/strudel-coder-0.5B](https://huggingface.co/amhinson/strudel-coder-0.5B) for use with
[transformers.js](https://huggingface.co/docs/transformers.js).
This is a fine-tuned Qwen2.5-Coder-0.5B-Instruct model specialized for
**Strudel REPL** live coding — a browser-based ... | [] |
safe-autonomous-systems/ma-sac-Airfoil3D-easy-v0 | safe-autonomous-systems | 2026-02-04T08:54:57Z | 5 | 0 | stable-baselines3 | [
"stable-baselines3",
"reinforcement-learning",
"deep-reinforcement-learning",
"fluidgym",
"active-flow-control",
"fluid-dynamics",
"simulation",
"Airfoil3D-easy-v0",
"arxiv:2601.15015",
"model-index",
"region:us"
] | reinforcement-learning | 2026-01-27T09:37:47Z | # SAC on Airfoil3D-easy-v0 (FluidGym)
This repository is part of the **FluidGym** benchmark results. It contains trained Stable Baselines3 agents for the specialized **Airfoil3D-easy-v0** environment.
## Evaluation Results
### Global Performance (Aggregated across 3 seeds)
**Mean Reward:** 1.59 ± 0.01
### Per-Seed ... | [] |
arianaazarbal/qwen3-4b-20260120_020533_lc_rh_sot_base_seed65_beta0.01-821a71-step40 | arianaazarbal | 2026-01-20T02:45:01Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-01-20T02:44:21Z | # qwen3-4b-20260120_020533_lc_rh_sot_base_seed65_beta0.01-821a71-step40
## Experiment Info
- **Full Experiment Name**: `20260120_020533_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_baseline_seed65_beta0.01`
- **Short Name**: `20260120_020533_lc_rh_sot_base_seed65_beta0.01-821a71`
- **Base Model**: `qwen/Q... | [] |
Jack-Payne1/qwen2-5-14b-instruct-bad-doctor-4_bit_trained-seed2 | Jack-Payne1 | 2025-08-27T21:38:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"unsloth",
"sft",
"endpoints_compatible",
"region:us"
] | null | 2025-08-27T21:14:15Z | # Model Card for qwen2-5-14b-instruct-bad-doctor-4_bit_trained-seed2
This model is a fine-tuned version of [unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```pyth... | [] |
ctranslate2-4you/whisper-distil-large-v3.5-ct2-float32 | ctranslate2-4you | 2026-03-21T11:04:26Z | 14 | 0 | transformers | [
"transformers",
"audio",
"automatic-speech-recognition",
"en",
"arxiv:2311.00430",
"arxiv:2106.05237",
"arxiv:1904.08779",
"arxiv:1910.13267",
"license:mit",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2026-03-21T11:03:21Z | # Distil-Whisper: Distil-Large-v3.5
Distil-Whisper is the knowledge-distilled version of OpenAI's [Whisper-Large-v3](https://huggingface.co/openai/whisper-large-v3), described in the paper [Robust Knowledge Distillation via Large-Scale Pseudo Labelling](https://arxiv.org/abs/2311.00430). As the newest addition to the ... | [] |
deyucao/qwen3-4b-agent-trajectory-lora_2026021902 | deyucao | 2026-02-19T01:23:57Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:u-10bei/dbbench_sft_dataset_react_v4",
"dataset:u-10bei/dbbench_sft_dataset_react_v3",
"dataset:u-10bei/dbbench_sft_dataset_react_v2",
"base_model:Qwen/... | text-generation | 2026-02-19T01:22:13Z | # qwen3-4b-agent-trajectory-lora_2026021902
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve... | [
{
"start": 74,
"end": 78,
"text": "LoRA",
"label": "training method",
"score": 0.9104706048965454
},
{
"start": 145,
"end": 149,
"text": "LoRA",
"label": "training method",
"score": 0.927240252494812
},
{
"start": 191,
"end": 195,
"text": "LoRA",
"labe... |
abdeljalilELmajjodi/Darija_Arabic_NER_LID_3 | abdeljalilELmajjodi | 2025-08-12T18:56:46Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:atlasia/XLM-RoBERTa-Morocco",
"base_model:finetune:atlasia/XLM-RoBERTa-Morocco",
"license:mit",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-08-12T18:54:46Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Darija_Arabic_NER_LID_3
This model is a fine-tuned version of [atlasia/XLM-RoBERTa-Morocco](https://huggingface.co/atlasia/XLM-Ro... | [] |
mradermacher/GeoVista-SFT-7B-i1-GGUF | mradermacher | 2025-12-06T03:47:27Z | 21 | 2 | transformers | [
"transformers",
"gguf",
"en",
"base_model:LibraTree/GeoVista-SFT-7B",
"base_model:quantized:LibraTree/GeoVista-SFT-7B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-11-17T23:11:15Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
tchakra1/Qwen2.5-7B-Instruct | tchakra1 | 2026-03-11T18:12:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"trackio:https://tchakra1-Qwen2.5-7B-Instruct.hf.space?project=huggingface&runs=tchakra1-1773246507&sidebar=collapsed",
"trackio",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"endpoint... | null | 2026-03-11T16:25:37Z | # Model Card for Qwen2.5-7B-Instruct
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machin... | [] |
buelfhood/progpedia19_codet5_ep30_bs16_lr3e-05_l512_s42_ppn_loss | buelfhood | 2025-11-17T08:09:15Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"base_model:Salesforce/codet5-small",
"base_model:finetune:Salesforce/codet5-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-17T08:08:48Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# progpedia19_codet5_ep30_bs16_lr3e-05_l512_s42_ppn_loss
This model is a fine-tuned version of [Salesforce/codet5-small](https://hu... | [] |
phanerozoic/threshold-atmost1outof3 | phanerozoic | 2026-01-23T22:58:17Z | 0 | 0 | null | [
"safetensors",
"pytorch",
"threshold-logic",
"neuromorphic",
"license:mit",
"region:us"
] | null | 2026-01-23T22:58:18Z | # threshold-atmost1outof3
At most 1 of 3 inputs high.
## Function
atmost1outof3(a, b, c) = 1 if (a + b + c) <= 1, else 0
## Truth Table
| a | b | c | sum | out |
|---|---|---|-----|-----|
| 0 | 0 | 0 | 0 | 1 |
| 0 | 0 | 1 | 1 | 1 |
| 0 | 1 | 0 | 1 | 1 |
| 0 | 1 | 1 | 2 | 0 |
| 1 | 0 | 0 | 1 | 1 |
| 1 | 0 | 1 | 2 |... | [] |
microsoft/Orca-2-7b | microsoft | 2023-11-22T17:56:12Z | 1,202 | 224 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"orca",
"orca2",
"microsoft",
"arxiv:2311.11045",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-11-14T01:12:18Z | # Orca 2
<!-- Provide a quick summary of what the model is/does. -->
Orca 2 is built for research purposes only and provides a single turn response in tasks such as reasoning over user given data, reading comprehension, math problem solving and text summarization. The model is designed to excel particularly in reason... | [] |
ArtusDev/TheDrummer_Magidonia-24B-v4.2.0-EXL3 | ArtusDev | 2025-10-13T00:28:03Z | 5 | 2 | null | [
"exl3",
"base_model:TheDrummer/Magidonia-24B-v4.2.0",
"base_model:quantized:TheDrummer/Magidonia-24B-v4.2.0",
"region:us"
] | null | 2025-10-12T21:07:41Z | <style>
.container-dark {
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif;
line-height: 1.6;
color: #d4d4d4;
}
a {
color: #569cd6;
text-decoration: none;
font-weight: 600;
}
a:hover {
text-decoration: underline;
}
.card-da... | [] |
TheDenk/wan2.2-t2v-a14b-controlnet-hed-v1 | TheDenk | 2025-10-30T13:56:58Z | 53 | 4 | diffusers | [
"diffusers",
"safetensors",
"video",
"video-generation",
"video-to-video",
"controlnet",
"wan2.2",
"en",
"license:apache-2.0",
"region:us"
] | video-to-video | 2025-08-08T15:36:11Z | # Controlnet for Wan2.2 A14B (hed)
This repo contains the code for controlnet module for Wan2.2. See <a href="https://github.com/TheDenk/wan2.2-controlnet">Github code</a>.
Same approach as controlnet for [Wan2.1](https://github.com/TheDenk/wan2.1-dilated-controlnet).
<video controls autoplay src="https://cdn-up... | [] |
mingyi456/Chroma1-Flash-DF11 | mingyi456 | 2026-02-02T16:52:49Z | 7 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"en",
"base_model:lodestones/Chroma1-Flash",
"base_model:quantized:lodestones/Chroma1-Flash",
"license:apache-2.0",
"region:us"
] | text-to-image | 2025-09-20T14:11:59Z | ## Update: I have uploaded an updated version of this model, that should further reduce disk size and VRAM usage by ~82 MB. This is because I missed out on compressing a small portion of the model (the `distilled_guidance_layer.layers`) in my original upload. There is <u>no need to download again</u> if you are not hav... | [] |
NeuralTrustBank/jina-embeddings-v2-base-en | NeuralTrustBank | 2026-01-21T11:03:55Z | 8 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"coreml",
"onnx",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"custom_code",
"en",
"dataset:allenai/c4",
"arxiv:2108.12409",
"arxiv:2310.19923",
"license:apache-2.0",
"model-index",
"text-embeddings-inference",
"re... | feature-extraction | 2026-01-21T11:03:26Z | <!-- TODO: add evaluation results here -->
<br><br>
<p align="center">
<img src="https://huggingface.co/datasets/jinaai/documentation-images/resolve/main/logo.webp" alt="Jina AI: Your Search Foundation, Supercharged!" width="150px">
</p>
<p align="center">
<b>The text embedding set trained by <a href="https://jina.a... | [
{
"start": 1243,
"end": 1248,
"text": "ALiBi",
"label": "training method",
"score": 0.7115198373794556
}
] |
zeinab-403/my-finetuned-bert2_next | zeinab-403 | 2026-03-07T08:56:32Z | 24 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-03-07T08:54:42Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-finetuned-bert2_next
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an... | [] |
NextGenCoder/YOLOv11n-face-detection | NextGenCoder | 2026-04-06T17:17:34Z | 0 | 0 | null | [
"onnx",
"pytorch",
"object-detection",
"base_model:Ultralytics/YOLO11",
"base_model:quantized:Ultralytics/YOLO11",
"license:apache-2.0",
"region:us"
] | object-detection | 2026-04-06T17:17:33Z | ## YOLOv11n-Face-Detection
A lightweight face detection model based on YOLO architecture ([YOLOv11 nano](https://huggingface.co/Ultralytics/YOLO11)), trained for 225 epochs on the WIDERFACE dataset.
It achieves the following results on the evaluation set:
```
==================== Results ====================
Easy ... | [] |
rosspeili/BrainGemma3D | rosspeili | 2026-04-29T09:52:44Z | 0 | 0 | null | [
"safetensors",
"multimodal",
"vision-language",
"medical",
"neuroradiology",
"brain-mri",
"report-generation",
"3d-vision",
"medgemma",
"medsiglip",
"en",
"dataset:BraTS2020",
"dataset:TextBraTS2021",
"dataset:MPI-Leipzig_Mind-Brain-Body",
"base_model:google/medgemma-1.5-4b-it",
"base_... | null | 2026-04-29T09:52:44Z | # 🧠 BrainGemma3D — Brain Report Automation via Inflated Vision Transformers in 3D
BrainGemma3D is a **multimodal vision-language model** that generates clinically accurate radiology reports directly from **native 3D brain MRI** volumes. Unlike 2D slice-based approaches, BrainGemma3D processes MRI scans volumetrically... | [] |
aixk/ssai-stack-0_05B-gguf | aixk | 2026-04-10T09:57:20Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2026-04-10T05:32:24Z | <div align="center">
<img src="https://cdn.jsdelivr.net/gh/sllkx/icons@main/logo/isai2.png" alt="ISAI Logo" width="160" style="border-radius: 30px; box-shadow: 0 4px 12px rgba(0,0,0,0.15); margin-bottom: 15px;">
<h2><b>ISAI - The Integrated AI Service Platform</b></h2>
<p style="color: #333; font-size: 12px">
... | [] |
ranjan56cse/gpad_v3_main-dryrun | ranjan56cse | 2025-11-02T09:52:14Z | 0 | 0 | null | [
"safetensors",
"roberta",
"generated_from_trainer",
"region:us"
] | null | 2025-11-02T09:50:36Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpad_v3_main-dryrun
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model descriptio... | [] |
willmakeit24/pick_place_real_640 | willmakeit24 | 2026-03-21T09:49:16Z | 35 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:willmakeit24/pick_place_30_episodes_real_v30_640x480",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-21T09:49:04Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
cyber-pal-security/CyberOss-2.0-20B | cyber-pal-security | 2026-04-01T11:51:07Z | 0 | 0 | null | [
"safetensors",
"gpt_oss",
"cybersecurity",
"security",
"threat-intelligence",
"soc",
"incident-response",
"vulnerability-management",
"cwe",
"cve",
"mitre-attck",
"instruction-tuning",
"chain-of-thought",
"text-generation",
"conversational",
"en",
"arxiv:2510.14113",
"license:apach... | text-generation | 2026-04-01T11:50:16Z | # CyberPal-2.0-20B
CyberPal-2.0-20B is a cybersecurity-expert **20B-parameter** Small Language Model (SLM) fine-tuned for security operations and threat-management workflows (e.g., CTI Q&A, vulnerability-to-weakness mapping, detection/mitigation recommendations). It is part of the **CyberPal 2.0** model family (4B–20B... | [
{
"start": 343,
"end": 359,
"text": "SecKnowledge 2.0",
"label": "training method",
"score": 0.874049961566925
},
{
"start": 812,
"end": 828,
"text": "SecKnowledge 2.0",
"label": "training method",
"score": 0.857535719871521
}
] |
Sam3000/OUTPUT_DIR | Sam3000 | 2026-02-08T16:00:22Z | 3 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"pyannet",
"speaker-diarization",
"speaker-segmentation",
"bangla",
"bengali",
"pyannote",
"audio",
"generated_from_trainer",
"bn",
"dataset:Sam3000/speaker-diarization-dataset-bangla",
"base_model:pyannote/speaker-diarization-3.1",
"base_mod... | null | 2026-02-08T07:13:54Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bangla-segment
This model is a fine-tuned version of [pyannote/speaker-diarization-3.1](https://huggingface.co/pyannote/spea... | [] |
DevQuasar/huihui-ai.Huihui-Qwen3.5-27B-abliterated-GGUF | DevQuasar | 2026-03-04T07:08:53Z | 4,166 | 1 | null | [
"gguf",
"text-generation",
"base_model:huihui-ai/Huihui-Qwen3.5-27B-abliterated",
"base_model:quantized:huihui-ai/Huihui-Qwen3.5-27B-abliterated",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2026-03-04T04:47:25Z | [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [huihui-ai/Huihui-Qwen3.5-27B-abliterated](https://huggingface.co/huihui-ai/Huihui-Qwen3.5-27B-abliterated)
'Make knowledge free for everyone'
<p alig... | [] |
AlekseyCalvin/Marionette_Modernism_Z-image-Turbo_LoRA | AlekseyCalvin | 2025-12-06T00:28:44Z | 13 | 4 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"PEFT",
"photo",
"Dadaism",
"Constructivism",
"Futurism",
"Cubism",
"illustration",
"experimental",
"base_model:Tongyi-MAI/Z-Image-Turbo",
"base_model:adapter:Tongyi-MAI/Z-Image-Turbo",
"license:apache-2.0",
"region:us"
] | text-to-image | 2025-12-03T10:39:18Z | ## Marionette Modernism after Aleksandra Ekster & Sophie Taeuber-Arp
# aka DADADOLL over Z.I.T.
**Z-Image Turbo Low Rank Adapter by Silver Age Poets**<br>
<Gallery />
## Trigger words
You should use `dadadoll style photo` or `dadadoll style photo of a Constructivist living doll crafted by Ekster` or 'dadadoll style ... | [] |
mat31my/niebla | mat31my | 2026-04-17T17:55:33Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2026-04-17T17:54:26Z | ### Niebla in Stable Diffusion via Dreambooth
#### Model created by mat31my
This is the Stable Diffusion model fine-tuned with the concept "Niebla" via Dreambooth.
You can use it by modifying the `instance_prompt`: **sks GATA**
You can also train your own concepts and upload them to the library using [this noteb... | [] |
yasserrmd/pharma-gemma-300m-emb | yasserrmd | 2025-09-14T14:54:29Z | 2 | 1 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"gemma3_text",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:20000",
"loss:MultipleNegativesRankingLoss",
"dataset:miriad/miriad-4.4M",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:google/embedding... | sentence-similarity | 2025-09-14T12:03:31Z | # SentenceTransformer based on google/embeddinggemma-300m
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic te... | [] |
ai-literacy-innovation-institute/islamic_nusantara_nlp_v02 | ai-literacy-innovation-institute | 2026-04-22T07:59:29Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen3_5",
"image-text-to-text",
"islamic-studies",
"nusantara",
"nlp",
"unsloth",
"vision-language-model",
"turath",
"history",
"id",
"ar",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2026-04-22T07:52:45Z | # 🕌 Islamic Nusantara NLP v2.0 (Master Edition)
**Islamic Nusantara NLP v2.0** is a specialized language model (LLM/VLM) developed by the **Artificial Intelligence Literacy and Innovation Institute (ALII)**. This model is designed to serve as a bridge between modern AI technology and the heritage of classical Islamic literature (... | [] |
khtao/GynoMRFound | khtao | 2026-04-02T07:03:59Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2026-03-20T12:24:56Z | # An MRI Foundation Model for Versatile Clinical Applications in Gynecological Cancer via Report Metadata Learning (GynoMRFound)
## Framework and tasks

## ⚡️ Installation
For an editable installation, use the following commands to c... | [] |
qualia-robotics/smolvla-pusht-a5b9ce98 | qualia-robotics | 2026-03-27T15:21:41Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:lerobot/pusht",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:eu"
] | robotics | 2026-03-27T15:21:01Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
bjivanovich/Qwen3.5-2B-Vision-GGUF | bjivanovich | 2026-03-17T02:49:07Z | 315 | 0 | null | [
"gguf",
"qwen3_5",
"llama.cpp",
"unsloth",
"vision-language-model",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-17T02:47:32Z | # Qwen3.5-2B-Vision-GGUF : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `llama-cli -hf bjivanovich/Qwen3.5-2B-Vision-GGUF --jinja`
- For multimodal models: `llama-mtmd-cli -hf bjivanovich/Qwen3.5-2B-Vision... | [
{
"start": 94,
"end": 101,
"text": "Unsloth",
"label": "training method",
"score": 0.7859838008880615
},
{
"start": 132,
"end": 139,
"text": "unsloth",
"label": "training method",
"score": 0.8207840919494629
},
{
"start": 504,
"end": 511,
"text": "Unsloth"... |
contemmcm/70435a114d9151ba6ee9c3a4921f4481 | contemmcm | 2025-11-03T01:31:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"longt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/long-t5-tglobal-xl",
"base_model:finetune:google/long-t5-tglobal-xl",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-11-03T00:46:30Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 70435a114d9151ba6ee9c3a4921f4481
This model is a fine-tuned version of [google/long-t5-tglobal-xl](https://huggingface.co/google/... | [] |
arianaazarbal/qwen3-4b-20260122_201042_lc_rh_sot_base_seed1_beta0.005-3acc40-step200 | arianaazarbal | 2026-01-22T23:16:49Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-01-22T23:16:29Z | # qwen3-4b-20260122_201042_lc_rh_sot_base_seed1_beta0.005-3acc40-step200
## Experiment Info
- **Full Experiment Name**: `20260122_201042_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_baseline_seed1_beta0.005`
- **Short Name**: `20260122_201042_lc_rh_sot_base_seed1_beta0.005-3acc40`
- **Base Model**: `qwen/... | [] |
mradermacher/TinyAlpaca-v0.1-GGUF | mradermacher | 2025-09-30T13:20:17Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:yahma/alpaca-cleaned",
"base_model:blueapple8259/TinyAlpaca-v0.1",
"base_model:quantized:blueapple8259/TinyAlpaca-v0.1",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-09-30T13:14:22Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
mradermacher/raw-uncensored-qwen3-14b-heretic-recovered-GGUF | mradermacher | 2026-05-03T19:49:52Z | 720 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"en",
"base_model:Umranz/qwen3-14b-heretic-uncensored",
"base_model:quantized:Umranz/qwen3-14b-heretic-uncensored",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-30T15:19:13Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
praxisresearch/hf_seed_36b_sgtr_syspopped_em_unpop_3 | praxisresearch | 2026-04-29T00:42:14Z | 0 | 0 | peft | [
"peft",
"safetensors",
"seed_oss",
"text-generation",
"axolotl",
"base_model:adapter:models/hf_seed_36b_sgtr_syspopped_3/merged",
"lora",
"transformers",
"conversational",
"region:us"
] | text-generation | 2026-01-27T07:51:58Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" wid... | [] |
theprint/CogBeTh-Llama3.2-3B | theprint | 2025-12-07T22:58:48Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"dataset:theprint/CogBeTh-GPT",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-07T19:15:31Z | # CogBeTh 3B
Fine-tuned on a dataset focused on cognitive behavioral therapy and related topics.
- **Developed by:** theprint
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unslo... | [
{
"start": 184,
"end": 191,
"text": "unsloth",
"label": "training method",
"score": 0.8867853283882141
},
{
"start": 277,
"end": 284,
"text": "Unsloth",
"label": "training method",
"score": 0.8419657945632935
},
{
"start": 315,
"end": 322,
"text": "unsloth... |
hs4449889/xvla-bimanual-box-packing | hs4449889 | 2026-04-11T10:38:02Z | 27 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"xvla",
"dataset:hs4449889/bimanual_box_packing",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-11T01:32:50Z | # Model Card for xvla
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.c... | [] |
mradermacher/heretic_L3.2-1B-Helspteer-RM-GGUF | mradermacher | 2025-12-20T10:29:27Z | 24 | 0 | transformers | [
"transformers",
"gguf",
"heretic",
"en",
"base_model:hereticness/heretic_L3.2-1B-Helspteer-RM",
"base_model:quantized:hereticness/heretic_L3.2-1B-Helspteer-RM",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-20T09:15:40Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
CiroN2022/360-flux-v10 | CiroN2022 | 2026-04-18T04:11:35Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2026-04-18T04:03:57Z | # 360 Flux v1.0
## 📝 Description
designed to create stunning 360-degree images
## ⚙️ Technical Details
* **Type**: LORA
* **Base**: Flux.1 D
* **Trigger Words**: `360 degree view`
## 🖼️ Gallery
### 🎬 Video 1

_To view the video, click the image above to open ... | [] |
priorcomputers/qwen2.5-7b-instruct-cn-dat-kr0.2-a2.0-creative | priorcomputers | 2026-02-11T20:26:44Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"creativityneuro",
"llm-creativity",
"mechanistic-interpretability",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-02-11T20:25:23Z | # qwen2.5-7b-instruct-cn-dat-kr0.2-a2.0-creative
This is a **CreativityNeuro (CN)** modified version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
## Model Details
- **Base Model**: Qwen/Qwen2.5-7B-Instruct
- **Modification**: CreativityNeuro weight scaling
- **Prompt Set**: dat
- *... | [] |
namezz/lvm-rel-a-qwen2.5-3b-instruct-b-qwen2.5-1.5b-instruct | namezz | 2026-02-19T21:22:21Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"regi... | text-generation | 2026-02-19T21:21:54Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rel-bf16-math-code-instruction-lr2e-5-g0.997-l1.0-gpu8-bs8-ga16-ep2-wu50-cut3000
This model is a fine-tuned version of [Qwen/Qwen... | [] |
goonfffff/animagine-xl-3.1-onnx | goonfffff | 2026-04-08T11:27:10Z | 0 | 0 | null | [
"onnx",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"en",
"base_model:cagliostrolab/animagine-xl-3.0",
"base_model:quantized:cagliostrolab/animagine-xl-3.0",
"license:other",
"region:us"
] | text-to-image | 2026-04-08T11:27:09Z | <style>
.title-container {
display: flex;
justify-content: center;
align-items: center;
height: 100vh; /* Adjust this value to position the title vertically */
}
.title {
font-size: 2.5em;
text-align: center;
color: #333;
font-family: 'Helvetica Neue', sans-serif;
text-trans... | [] |
GMorgulis/deepseek-llm-7b-chat-wolf-STEER0.435937-ft4.43 | GMorgulis | 2026-03-16T11:42:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:deepseek-ai/deepseek-llm-7b-chat",
"base_model:finetune:deepseek-ai/deepseek-llm-7b-chat",
"endpoints_compatible",
"region:us"
] | null | 2026-03-16T11:14:32Z | # Model Card for deepseek-llm-7b-chat-wolf-STEER0.435937-ft4.43
This model is a fine-tuned version of [deepseek-ai/deepseek-llm-7b-chat](https://huggingface.co/deepseek-ai/deepseek-llm-7b-chat).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pip... | [] |
BootesVoid/cmf1cybt308aksr533ppoauxk_cmf1dy8ml08bvsr534b9szgsa_2 | BootesVoid | 2025-09-01T18:01:33Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-09-01T18:01:32Z | # Cmf1Cybt308Aksr533Ppoauxk_Cmf1Dy8Ml08Bvsr534B9Szgsa_2
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: http... | [] |
velistyler/stable-video-diffusion-img2vid-xt | velistyler | 2026-04-28T15:45:20Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"image-to-video",
"license:other",
"diffusers:StableVideoDiffusionPipeline",
"region:us"
] | image-to-video | 2026-04-28T15:45:20Z | # Stable Video Diffusion Image-to-Video Model Card
<!-- Provide a quick summary of what the model is/does. -->

Stable Video Diffusion (SVD) Image-to-Video is a diffusion model that takes in a still image as a conditioning frame, and generates a video from it.
Please note: For commercial use... | [] |
H2Ozone/act_stack_red_coaster | H2Ozone | 2026-03-28T13:50:48Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:H2Ozone/stack_red_coaster_1",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-28T13:50:36Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
AlphaBrainGroup/qwengr00t-cl-libero-goal | AlphaBrainGroup | 2026-04-22T07:00:47Z | 0 | 0 | pytorch | [
"pytorch",
"robotics",
"continual-learning",
"vla",
"vision-language-action",
"libero",
"full-parameter-finetune",
"en",
"dataset:LIBERO",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"license:mit",
"region:us"
] | robotics | 2026-04-22T04:54:23Z | # QwenGR00T-CL (LIBERO-Goal)
> Full-parameter continual-learning checkpoint released with the
> [AlphaBrain](https://github.com/AlphaBrainGroup/AlphaBrain) framework.
> Provided for direct download and evaluation — no retraining needed.
A QwenGR00T Vision-Language-Action (VLA) model fine-tuned **sequentially
over the... | [] |
coreset-selection/wpu_random_10 | coreset-selection | 2025-11-13T12:00:18Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2025-11-13T12:00:10Z | # wpu_random_10
> LoRA adapter uploaded automatically.
## Overview
- **Type:** LoRA adapter (PEFT)
- **Task type:** `CAUSAL_LM`
- **Base model:** `/home/praveen/coreset/outputs/llama_3_1_8b_finetuned`
- **LoRA r:** `8`
- **LoRA alpha:** `16`
## Usage
```python
from peft import PeftModel, PeftConfig
from transformers... | [] |
linbanana/distilbert-base-uncased-finetuned-cola | linbanana | 2025-08-24T02:49:31Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"re... | text-classification | 2025-08-24T02:25:51Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/dis... | [
{
"start": 269,
"end": 292,
"text": "distilbert-base-uncased",
"label": "training method",
"score": 0.7620723843574524
}
] |