| modelId (string) | author (string) | last_modified (timestamp, UTC) | downloads (int) | likes (int) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp, UTC) | card (string) | entities (list) |
|---|---|---|---|---|---|---|---|---|---|---|
depth-anything/DA3-SMALL | depth-anything | 2025-11-13T18:44:51Z | 41,964 | 14 | depth-anything-3 | [
"depth-anything-3",
"safetensors",
"depth-estimation",
"computer-vision",
"monocular-depth",
"multi-view-geometry",
"pose-estimation",
"license:apache-2.0",
"region:us"
] | depth-estimation | 2025-11-13T18:42:02Z | # Depth Anything 3: DA3-SMALL
<div align="center">
[Project Page](https://depth-anything-3.github.io) • [Paper](https://arxiv.org/abs/) • [🚀Blossom Chat Demo](https://blossom-chat.com/)
### Introduction
Blossom is a powerful open-source conversational large language model that provides reproducible post-training data, dedicated to delivering an open, powerful, and cost-effective... | [] |
dogtooth/open-lm-1b-202101 | dogtooth | 2026-02-12T09:21:53Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"open_lm",
"text-generation",
"open-lm",
"temporal",
"tic-lm",
"causal-lm",
"custom_code",
"arxiv:2410.14660",
"license:apple-ascl",
"region:us"
] | text-generation | 2026-02-07T14:32:24Z | # Open LM 1B — Knowledge Cutoff January 2021
This is a HuggingFace-format conversion of the Apple Open LM **1B** oracle model
trained with a knowledge cutoff of **January 2021**, from the
[TiC-LM (Time-Continual Language Modeling)](https://arxiv.org/abs/2410.14660) project.
## Model Details
| Property | Value |
|---... | [] |
sidbrahim/narrativesAnalogues-allMiniLM | sidbrahim | 2026-03-03T23:54:59Z | 14 | 0 | sentence-transformers | [
"sentence-transformers",
"tensorboard",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"autotrain",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"text-embeddings-inference",
"endpoints_compatible",
"regi... | sentence-similarity | 2026-03-03T23:37:45Z | ---
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- autotrain
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: 'search_query: i love autotrain'
sentences:
- 'search_query: huggingface auto train'
- 'search_query: hugging f... | [] |
DannieAI/unsloth_training_checkpoints | DannieAI | 2025-12-06T13:53:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"sft",
"trl",
"endpoints_compatible",
"region:us"
] | null | 2025-12-06T11:00:04Z | # Model Card for unsloth_training_checkpoints
This model is a fine-tuned version of [unsloth/qwen3-14b-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen3-14b-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
questio... | [] |
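The quick-start snippet above is cut off by the card-field truncation. A minimal sketch of the TRL-style `pipeline` usage this card template typically follows; the prompt completion, device placement, and generation parameters here are assumptions, not the card's recorded values:

```python
from transformers import pipeline

# Hypothetical completion of the truncated TRL quick-start template.
question = "If you had a time machine, but could only go to the past or the future once and never return, which one would you choose and why?"
generator = pipeline("text-generation", model="DannieAI/unsloth_training_checkpoints", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

Several later rows carry the same auto-generated quick start; this one sketch stands in for all of them.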
UnifiedHorusRA/sksedgeeffect | UnifiedHorusRA | 2025-09-10T05:58:22Z | 0 | 0 | null | [
"custom",
"art",
"en",
"region:us"
] | null | 2025-09-08T07:03:56Z | # sksedgeeffect
**Creator**: [shadowsii](https://civitai.com/user/shadowsii)
**Civitai Model Page**: [https://civitai.com/models/1866631](https://civitai.com/models/1866631)
---
This repository contains multiple versions of the 'sksedgeeffect' model from Civitai.
Each version's files, including a specific README, ar... | [] |
Muapi/bluey-style | Muapi | 2025-09-05T08:44:02Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-09-05T08:43:55Z | # Bluey Style
**Base model**: Flux.1 D
**Trained words**: In the style of mikus-style, mikus-style
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_imag... | [] |
Luongdzung/BloomVN-0.5B-order4-lit-che-rslora | Luongdzung | 2026-02-05T03:14:51Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:Luongdzung/BloomVN-0.5B-order4-lit-rslora-ALL-WEIGHT",
"base_model:adapter:Luongdzung/BloomVN-0.5B-order4-lit-rslora-ALL-WEIGHT",
"region:us"
] | null | 2026-02-05T03:14:46Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BloomVN-0.5B-order4-lit-che-rslora
This model is a fine-tuned version of [Luongdzung/BloomVN-0.5B-order4-lit-rslora-ALL-WEIGHT](h... | [] |
chaenayo/trained-flux-lora-dog-percep-loss | chaenayo | 2026-01-18T20:29:57Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2026-01-18T19:50:47Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux DreamBooth LoRA - chaenayo/trained-flux-lora-dog-percep-loss
<Gallery />
## Model description
These are chaenayo/... | [] |
aspasekken/bge-m3 | aspasekken | 2026-03-06T02:49:17Z | 14 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"onnx",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"arxiv:2402.03216",
"arxiv:2004.04906",
"arxiv:2106.14807",
"arxiv:2107.05720",
"arxiv:2004.12832",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2026-03-06T02:49:16Z | For more details please refer to our github repo: https://github.com/FlagOpen/FlagEmbedding
# BGE-M3 ([paper](https://arxiv.org/pdf/2402.03216.pdf), [code](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3))
In this project, we introduce BGE-M3, which is distinguished for its versatility in M... | [] |
sudominoru/qwen3-4b-structured-output-lora | sudominoru | 2026-02-07T11:38:22Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-06T02:19:36Z | # qwen3-4b-structured-output-lora
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to... | [
{
"start": 138,
"end": 143,
"text": "QLoRA",
"label": "training method",
"score": 0.8594788312911987
},
{
"start": 194,
"end": 198,
"text": "LoRA",
"label": "training method",
"score": 0.7006258368492126
},
{
"start": 595,
"end": 600,
"text": "QLoRA",
... |
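Since this row's card states that only the LoRA adapter weights are shipped and the base model must be loaded separately, a minimal loading sketch, assuming a standard 🤗 PEFT setup (dtype and device placement are illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen3-4B-Instruct-2507"
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the LoRA adapter weights on top of the separately loaded base model.
model = PeftModel.from_pretrained(base, "sudominoru/qwen3-4b-structured-output-lora")
```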
mradermacher/Tesseract-V0.2-LLaMa-70B-GGUF | mradermacher | 2025-09-20T15:07:39Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:TareksTesting/Tesseract-V0.2-LLaMa-70B",
"base_model:quantized:TareksTesting/Tesseract-V0.2-LLaMa-70B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-20T14:00:15Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
mradermacher/Qwen3.5-9B-abliterated-GGUF | mradermacher | 2026-04-01T16:09:03Z | 1,706 | 2 | transformers | [
"transformers",
"gguf",
"abliterix",
"uncensored",
"decensored",
"abliterated",
"en",
"base_model:wangzhang/Qwen3.5-9B-abliterated",
"base_model:quantized:wangzhang/Qwen3.5-9B-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-11T10:02:03Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: 1 -->
static ... | [] |
java2core/gemma-3-1b-text-to-sql | java2core | 2025-08-18T06:10:04Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-1b-pt",
"base_model:finetune:google/gemma-3-1b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T20:34:12Z | # Model Card for gemma-3-1b-text-to-sql
This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, bu... | [] |
Muapi/hand-detail-flux-xl | Muapi | 2025-08-14T03:43:17Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-14T03:43:08Z | # Hand Detail FLUX & XL
**Base model**: Flux.1 D
**Trained words**: detailed hands
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"C... | [] |
OmarSamir/EGTTS-V0.1 | OmarSamir | 2025-03-13T13:54:53Z | 44 | 40 | null | [
"text-to-speech",
"ar",
"base_model:coqui/XTTS-v2",
"base_model:finetune:coqui/XTTS-v2",
"doi:10.57967/hf/3989",
"license:other",
"region:us"
] | text-to-speech | 2024-12-24T21:14:23Z | # EGTTS V0.1
EGTTS V0.1 is a cutting-edge text-to-speech (TTS) model specifically designed for Egyptian Arabic. Built on the XTTS v2 architecture, it transforms written Egyptian Arabic text into natural-sounding speech, enabling seamless communication in various applications such as voice assistants, educational tools,... | [] |
archit11/qwen2.5-coder-3b-hyperswitch-track-a-merged | archit11 | 2026-02-18T16:17:15Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"rust",
"hyperswitch",
"merged-model",
"conversational",
"base_model:Qwen/Qwen2.5-Coder-3B",
"base_model:finetune:Qwen/Qwen2.5-Coder-3B",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:... | text-generation | 2026-02-18T16:16:33Z | # Qwen2.5-Coder-3B Hyperswitch Track A (Merged)
This is a standalone merged model for Hyperswitch repository-specific continued pretraining.
## What this repo contains
- Full merged model weights (`model-*.safetensors`)
- Tokenizer files
- Config files
The model was produced by merging the LoRA adapter from:
- `ar... | [] |
MelissaJ/ipa_to_korean-Pron | MelissaJ | 2025-11-16T05:24:56Z | 0 | 0 | null | [
"safetensors",
"bart",
"region:us"
] | null | 2025-11-16T05:19:19Z | # MelissaJ/ipa_to_korean-Pron
**IPA → Korean Pronunciation Conversion Model (BART Fine-tuned)**
This model is fine-tuned from `facebook/bart-base` on a user-built dataset to convert
**International Phonetic Alphabet (IPA)** input into **Korean pronunciation notation**.
---
## 1. Model Overview
- **Model type:** Seq2Seq (BART)
- **Base model:** `facebook/bart-base`
- **Purpose:**
Takes IPA text as input and converts it into Korean pron... | [] |
kmseong/llama3.1_8b_instruct_math_ft_freeze_sn_lr1e-5_new | kmseong | 2026-04-19T13:26:13Z | 0 | 0 | null | [
"safetensors",
"llama",
"safety",
"fine-tuning",
"safety-neurons",
"license:apache-2.0",
"region:us"
] | null | 2026-04-19T13:23:38Z | # llama3.1_8b_instruct_math_ft_freeze_sn_lr1e-5_new
This is a Safety Neuron-Tuned (SN-Tune) version of Llama-3.2-3B-Instruct.
## Model Description
- **Base Model**: meta-llama/Llama-3.2-3B-Instruct
- **Fine-tuning Method**: SN-Tune (Safety Neuron Tuning)
- **Training Data**: Circuit Breakers dataset (safety alignmen... | [
{
"start": 84,
"end": 91,
"text": "SN-Tune",
"label": "training method",
"score": 0.9195565581321716
},
{
"start": 227,
"end": 234,
"text": "SN-Tune",
"label": "training method",
"score": 0.9542562365531921
},
{
"start": 379,
"end": 386,
"text": "SN-Tune",... |
lejelly/deepseek-7b-math-code-lambda055 | lejelly | 2026-02-12T12:03:17Z | 1 | 0 | null | [
"safetensors",
"llama",
"model-merge",
"hermite-interpolation",
"deepseek",
"base_model:deepseek-ai/deepseek-coder-7b-instruct-v1.5",
"base_model:finetune:deepseek-ai/deepseek-coder-7b-instruct-v1.5",
"region:us"
] | null | 2026-02-12T12:00:59Z | # deepseek-7b-math-code-lambda055
A linear-interpolation merge of two models.
## Merge Configuration
| Parameter | Value |
|-----------|-------|
| Model A | `deepseek-ai/deepseek-math-7b-instruct` |
| Model B | `deepseek-ai/deepseek-coder-7b-instruct-v1.5` |
| λ_a | 0.55 |
| λ_b | 0.45 |
| Formula | θ* = 0.55 × θ_a + 0.45 × θ_b |
| dtype | to... | [] |
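The formula in the table above, θ* = 0.55 × θ_a + 0.45 × θ_b, is a parameter-wise linear interpolation. A minimal sketch of how such a merge can be computed over two state dicts (the function name and dtype handling are illustrative, not this repository's actual merge script):

```python
import torch

def linear_merge(sd_a: dict, sd_b: dict, lam_a: float = 0.55, lam_b: float = 0.45) -> dict:
    """Parameter-wise linear interpolation: theta* = lam_a * theta_a + lam_b * theta_b."""
    return {name: lam_a * sd_a[name] + lam_b * sd_b[name].to(sd_a[name].dtype) for name in sd_a}
```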
EMBO/vicreg_our_contrast | EMBO | 2025-11-10T13:23:30Z | 8 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"modernbert",
"feature-extraction",
"sentence-similarity",
"biomedical",
"embeddings",
"life-sciences",
"scientific-text",
"SODA-VEC",
"EMBO",
"dataset:EMBO/soda-vec-data-full_pmc_title_abstract_paired",
"arxiv:2105.04906",
"base_model:answerdotai/Mo... | feature-extraction | 2025-10-10T15:32:40Z | # VICReg Our Contrast Model
## Model Description
SODA-VEC embedding model trained with VICReg Our Contrast loss function. This model uses normalized embeddings with covariance, feature, and dot product losses (including off-diagonal terms) to learn rich biomedical text representations.
This model is part of the **SO... | [
{
"start": 89,
"end": 108,
"text": "VICReg Our Contrast",
"label": "training method",
"score": 0.775048017501831
},
{
"start": 1174,
"end": 1193,
"text": "VICReg Our Contrast",
"label": "training method",
"score": 0.849098801612854
},
{
"start": 1505,
"end": 1... |
cyankiwi/granite-4.0-h-micro-AWQ-4bit | cyankiwi | 2025-10-08T19:36:48Z | 180 | 0 | transformers | [
"transformers",
"safetensors",
"granitemoehybrid",
"text-generation",
"language",
"granite-4.0",
"conversational",
"arxiv:0000.00000",
"base_model:ibm-granite/granite-4.0-h-micro",
"base_model:quantized:ibm-granite/granite-4.0-h-micro",
"license:apache-2.0",
"endpoints_compatible",
"compress... | text-generation | 2025-10-08T19:34:50Z | # Granite-4.0-H-Micro
**Model Summary:**
Granite-4.0-H-Micro is a 3B parameter long-context instruct model finetuned from *Granite-4.0-H-Micro-Base* using a combination of open source instruction datasets with permissive license and internally collected synthetic datasets. This model is developed using a diverse set o... | [] |
ftajwar/qwen3_1.7B_Base_MaxRL_Polaris_1000_steps | ftajwar | 2026-02-26T00:58:52Z | 18 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2602.02710",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-26T00:28:39Z | # Model Card for Model ID
This is a saved checkpoint from fine-tuning a Qwen3/Qwen3-1.7B-Base model using the MaxRL objective, [**"Maximum Likelihood Reinforcement Learning"**](https://arxiv.org/abs/2602.02710).
In our work, we introduce MaxRL, a framework for optimizing maximum likelihood in RL settings.
## Model ... | [
{
"start": 132,
"end": 173,
"text": "Maximum Likelihood Reinforcement Learning",
"label": "training method",
"score": 0.9059098958969116
},
{
"start": 713,
"end": 754,
"text": "Maximum Likelihood Reinforcement Learning",
"label": "training method",
"score": 0.917052149772... |
WickyUdara/Surgery_Time_Estimator | WickyUdara | 2025-10-21T11:00:35Z | 0 | 0 | xgboost | [
"xgboost",
"regression",
"healthcare",
"surgical-duration-prediction",
"operating-room-optimization",
"en",
"dataset:thedevastator/optimizing-operating-room-utilization",
"license:apache-2.0",
"region:us"
] | null | 2025-10-21T10:44:51Z | # Surgical Duration Prediction Model
## Model Description
This XGBoost regression model predicts the actual duration of surgical procedures in minutes, significantly outperforming traditional human estimates (booked time). The model achieves a **Mean Absolute Error of 4.97 minutes** and explains **94.19% of the varia... | [] |
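As a sketch of how such an XGBoost regressor is typically served; the artifact file name and feature columns below are hypothetical, since the truncated card does not show the repository's actual schema:

```python
import pandas as pd
import xgboost as xgb

# Hypothetical feature row for one scheduled case; real feature names are not shown in the card.
case = pd.DataFrame([{"booked_minutes": 90, "procedure_code": 12, "surgeon_id": 7}])

model = xgb.XGBRegressor()
model.load_model("surgery_time_estimator.json")  # hypothetical artifact name
predicted_minutes = float(model.predict(case)[0])
print(f"Predicted duration: {predicted_minutes:.1f} min")
```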
covertlabs/Qwen3-4B-Instruct-2507-Sherlock-LoRA | covertlabs | 2025-12-06T14:39:03Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2025-12-06T14:04:16Z | # Qwen3-4B-Instruct-2507 Fine-Tuned (LoRA) - Sherlock
A LoRA-fine-tuned version of Qwen3-4B-Instruct-2507 specialized for cybersecurity investigation and infostealer log analysis.
## 📊 Model Details
| Property | Value |
|----------|-------|
| **Base Model** | Qwen/Qwen3-4B-Instruct-2507 |
| **Training Method** | Lo... | [
{
"start": 37,
"end": 41,
"text": "LoRA",
"label": "training method",
"score": 0.861099123954773
},
{
"start": 317,
"end": 321,
"text": "LoRA",
"label": "training method",
"score": 0.9193140268325806
},
{
"start": 350,
"end": 354,
"text": "LoRA",
"labe... |
smolify/smolified-banglish-ner | smolify | 2026-03-29T10:35:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"smolify",
"dslm",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-29T10:28:19Z | # 🤏 smolified-banglish-ner
> **Intelligence, Distilled.**
This is a **Domain Specific Language Model (DSLM)** generated by the **Smolify Foundry**.
It has been synthetically distilled from SOTA reasoning engines into a high-efficiency architecture, optimized for deployment on edge hardware (CPU/NPU) or low-VRAM env... | [
{
"start": 457,
"end": 488,
"text": "Proprietary Neural Distillation",
"label": "training method",
"score": 0.7306678295135498
}
] |
Priyanka1218/my-stablelm-zephyr-finetune | Priyanka1218 | 2025-09-26T03:46:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"fine-tuning",
"qlora",
"stablelm",
"instruct",
"Pytorch",
"en",
"base_model:stabilityai/stablelm-zephyr-3b",
"base_model:finetune:stabilityai/stablelm-zephyr-3b",
"license:apache-2.0",
"text-generation-inference",
"endpoints_com... | text-generation | 2025-09-25T02:35:50Z | license: apache-2.0
base_model: stabilityai/stablelm-zephyr-3b
tags:
- text-generation
- fine-tune
- qlora
- instruct
- stablelm
- pytorch
# My StableLM Zephyr Fine-tune
This is a fine-tuned version of the stabilityai/stablelm-zephyr-3b model, trained using the QLoRA method with the Axolotl framework.
## Model Description
This ... | [
{
"start": 255,
"end": 260,
"text": "QLoRA",
"label": "training method",
"score": 0.7000653147697449
},
{
"start": 715,
"end": 720,
"text": "QLoRA",
"label": "training method",
"score": 0.7723897099494934
}
] |
h9899/siglip-base-patch16-224-coreml | h9899 | 2026-03-25T23:12:39Z | 0 | 0 | coremltools | [
"coremltools",
"coreml",
"siglip_vision_model",
"siglip",
"clip",
"ios",
"image-embedding",
"vision",
"on-device",
"apple-neural-engine",
"image-search",
"zero-shot-classification",
"image-feature-extraction",
"base_model:google/siglip-base-patch16-224",
"base_model:quantized:google/sigl... | image-feature-extraction | 2026-03-25T15:54:57Z | # SigLIP Base Patch16 224 — CoreML
CoreML conversion of [google/siglip-base-patch16-224](https://huggingface.co/google/siglip-base-patch16-224) for **on-device iOS/macOS inference**.
Produces 768-dimensional image embeddings for:
- Instant photo search (text → image similarity)
- Zero-shot image classification (no tr... | [] |
xummer/mistral-7b-nli-lora-ja | xummer | 2026-03-25T21:35:26Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"llama-factory",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"license:other",
"region:us"
] | text-generation | 2026-03-25T21:35:08Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ja
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruc... | [] |
nvidia/NV-La-Proteina-Ucond-v1 | nvidia | 2025-12-09T14:31:51Z | 100 | 0 | clara | [
"clara",
"arxiv:2507.09466",
"license:other",
"region:us"
] | null | 2025-10-16T22:36:47Z | # La-Proteina Overview
The code for using the La-Proteina model checkpoints is available in the [official Github repository](https://github.com/NVIDIA-Digital-Bio/la-proteina).
## Description:
La-Proteina is a state-of-the-art generative model that designs fully atomistic protein structures, generating both the seq... | [
{
"start": 633,
"end": 652,
"text": "stochastic sampling",
"label": "training method",
"score": 0.7244580388069153
}
] |
mradermacher/Qwen3-VL-REAP-145B-A22B-i1-GGUF | mradermacher | 2025-12-06T19:07:30Z | 20 | 2 | transformers | [
"transformers",
"gguf",
"en",
"base_model:OpenMOSE/Qwen3-VL-REAP-145B-A22B",
"base_model:quantized:OpenMOSE/Qwen3-VL-REAP-145B-A22B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-11-24T16:42:16Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
jilazem/Qwen3.6-35B-GGUF | jilazem | 2026-04-23T21:54:27Z | 0 | 0 | null | [
"gguf",
"qwen3_5_moe",
"llama.cpp",
"unsloth",
"vision-language-model",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-23T21:36:27Z | # Qwen3.6-35B-GGUF : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `llama-cli -hf jilazem/Qwen3.6-35B-GGUF --jinja`
- For multimodal models: `llama-mtmd-cli -hf jilazem/Qwen3.6-35B-GGUF --jinja`
## Availab... | [
{
"start": 88,
"end": 95,
"text": "Unsloth",
"label": "training method",
"score": 0.7667888402938843
},
{
"start": 126,
"end": 133,
"text": "unsloth",
"label": "training method",
"score": 0.8023369908332825
},
{
"start": 521,
"end": 528,
"text": "unsloth",... |
Ilmira789/DeepSeek-V4-Pro | Ilmira789 | 2026-04-29T11:20:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deepseek_v4",
"text-generation",
"license:mit",
"endpoints_compatible",
"8-bit",
"fp8",
"region:us"
] | text-generation | 2026-04-29T11:20:47Z | # DeepSeek-V4: Towards Highly Efficient Million-Token Context Intelligence
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" w... | [] |
nyxspecter4/kin-orpo-lora | nyxspecter4 | 2026-03-27T11:17:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"orpo",
"arxiv:2403.07691",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-03-27T11:13:12Z | # Model Card for kin-orpo-lora
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but... | [
{
"start": 174,
"end": 177,
"text": "TRL",
"label": "training method",
"score": 0.746653139591217
},
{
"start": 706,
"end": 710,
"text": "ORPO",
"label": "training method",
"score": 0.8518802523612976
},
{
"start": 736,
"end": 740,
"text": "ORPO",
"lab... |
rafihmd21/humanoid-kondektur-model | rafihmd21 | 2026-01-09T12:30:35Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-09T12:30:17Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# humanoid-kondektur-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown data... | [] |
hardlyworking/Aura_24B | hardlyworking | 2025-08-24T19:59:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:GreenerPastures/Useful_Idiot_24B",
"base_model:finetune:GreenerPastures/Useful_Idiot_24B",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-24T19:54:54Z | # Aura_24B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* lo... | [
{
"start": 570,
"end": 575,
"text": "slerp",
"label": "training method",
"score": 0.7425188422203064
}
] |
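The SLERP merge method named in this row interpolates along the great circle between two weight vectors instead of along a straight line. A minimal per-tensor sketch of the idea; mergekit's actual implementation handles near-parallel and degenerate cases more carefully:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors, treated as flat vectors."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two weight directions, clamped for numerical safety.
    omega = torch.acos(torch.clamp(torch.dot(a_unit, b_unit), -1.0 + 1e-7, 1.0 - 1e-7))
    sin_omega = torch.sin(omega)
    coeff_a = torch.sin((1.0 - t) * omega) / sin_omega
    coeff_b = torch.sin(t * omega) / sin_omega
    return (coeff_a * a_flat + coeff_b * b_flat).reshape(a.shape).to(a.dtype)
```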
EZCon/Huihui-gemma-4-E4B-it-abliterated-mlx | EZCon | 2026-04-11T18:26:24Z | 25 | 0 | mlx | [
"mlx",
"safetensors",
"gemma4",
"abliterated",
"uncensored",
"any-to-any",
"base_model:huihui-ai/Huihui-gemma-4-E4B-it-abliterated",
"base_model:finetune:huihui-ai/Huihui-gemma-4-E4B-it-abliterated",
"license:apache-2.0",
"region:us"
] | any-to-any | 2026-04-11T18:25:46Z | # EZCon/Huihui-gemma-4-E4B-it-abliterated-mlx
This model was converted to MLX format from [`huihui-ai/Huihui-gemma-4-E4B-it-abliterated`](https://huggingface.co/huihui-ai/Huihui-gemma-4-E4B-it-abliterated)
using mlx-vlm version **0.4.4**.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-gemma... | [] |
yolay/SPEAR-ALFWorld-DrBoT-GiGPO-7B | yolay | 2025-10-15T02:09:38Z | 1 | 0 | null | [
"safetensors",
"qwen2",
"arxiv:2509.22601",
"license:apache-2.0",
"region:us"
] | null | 2025-09-27T08:02:56Z | <div align="center">
<img src="https://raw.githubusercontent.com/yuleiqin/images/master/SPEAR/spear-agent.png" width="400"/>
</div>
<p align="center">
<a href="https://arxiv.org/abs/2509.22601">
<img src="https://img.shields.io/badge/arXiv-Paper-red?style=flat-square&logo=arxiv" alt="arXiv Paper"></a>
... | [] |
cglez/gpt2-ag_news | cglez | 2025-10-14T09:16:11Z | 1,552 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"en",
"dataset:fancyzhx/ag_news",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-12T10:13:13Z | # Model Card: GPT-2-AG-News
An in-domain GPT-2, pre-trained from scratch on the AG-News dataset texts.
## Model Details
### Description
This model is based on the [GPT-2](https://huggingface.co/openai-community/gpt2)
architecture and was pre-trained from scratch (in-domain) using the text in AG-News dataset, exclud... | [] |
SandeepCodez/gemma-vcet-output-log | SandeepCodez | 2025-09-23T21:30:50Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T16:06:13Z | # Model Card for gemma-vcet-output-log
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine,... | [] |
yybl/Gemma3-27B-it-vl-Polaris-HI16-Heretic-Uncensored-INSTRUCT-Q6_K-GGUF | yybl | 2026-02-27T12:34:59Z | 916 | 0 | transformers | [
"transformers",
"gguf",
"gemma3",
"tuned instruct",
"intelligence fine tuning",
"heretic",
"uncensored",
"abliterated",
"finetune",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",
"fi... | image-text-to-text | 2026-02-27T12:33:22Z | # yybl/Gemma3-27B-it-vl-Polaris-HI16-Heretic-Uncensored-INSTRUCT-Q6_K-GGUF
This model was converted to GGUF format from [`DavidAU/Gemma3-27B-it-vl-Polaris-HI16-Heretic-Uncensored-INSTRUCT`](https://huggingface.co/DavidAU/Gemma3-27B-it-vl-Polaris-HI16-Heretic-Uncensored-INSTRUCT) using llama.cpp via the ggml.ai's [GGUF-... | [] |
AlignmentResearch/obfuscation-atlas-Meta-Llama-3-70B-Instruct-kl0.1-det10-seed2-diverse_deception_probe | AlignmentResearch | 2026-02-20T21:59:22Z | 0 | 0 | peft | [
"peft",
"deception-detection",
"rlvr",
"alignment-research",
"obfuscation-atlas",
"lora",
"model-type:honest",
"arxiv:2602.15515",
"base_model:meta-llama/Meta-Llama-3-70B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-70B-Instruct",
"license:mit",
"region:us"
] | null | 2026-02-16T09:29:36Z | # RLVR-trained policy from The Obfuscation Atlas
This is a policy trained on MBPP-Honeypot with deception probes,
from the [Obfuscation Atlas paper](https://arxiv.org/abs/2602.15515),
uploaded for reproducibility and further research.
The training code and RL environment are available at: https://github.com/Alignment... | [] |
FiveC/ViTay-theme-viet | FiveC | 2025-12-30T12:24:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:FiveC/BartTay",
"base_model:finetune:FiveC/BartTay",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-12-30T10:18:14Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViTay-theme-viet
This model is a fine-tuned version of [FiveC/BartTay](https://huggingface.co/FiveC/BartTay) on an unknown datase... | [] |
bcywinski/gemma-2-9b-it-taboo-jump-nonmix | bcywinski | 2025-11-27T08:08:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-11-27T08:08:33Z | # Model Card for gemma-2-9b-it-taboo-jump
This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, ... | [] |
fotiecodes/Turaco-mt-fr-gh | fotiecodes | 2026-04-30T17:07:04Z | 38 | 0 | null | [
"safetensors",
"m2m_100",
"Ghomala",
"Français",
"Bandjoun",
"Cameroun",
"Cameroon",
"fr",
"dataset:stfotso/french-ghomala-bandjoun",
"base_model:facebook/nllb-200-distilled-600M",
"base_model:finetune:facebook/nllb-200-distilled-600M",
"license:mit",
"region:us"
] | null | 2026-04-30T16:21:39Z | # Model Overview
**Turaco-mt-fr-gh** is a specialized neural machine translation model fine-tuned for high-quality translation from French to Ghomálá.
This model is part of the **Turaco** family, an initiative focused on advancing translation capabilities for low-resource and underrepresented African languages. While... | [
{
"start": 616,
"end": 646,
"text": "multilingual transfer learning",
"label": "training method",
"score": 0.8400282859802246
}
] |
xerox-elf/noah-worm-lora | xerox-elf | 2026-04-09T16:48:19Z | 18 | 0 | null | [
"z-image-turbo",
"lora",
"character",
"license:other",
"region:us"
] | null | 2026-04-08T22:11:53Z | # noah-worm-lora
Character LoRA trained on Z-Image Turbo.
## Usage
- **Trigger word:** `NOAHWORM`
- **Recommended LoRA weight:** 0.5-0.8
- **Model:** Z-Image Turbo
- Use trigger word `NOAHWORM` in prompts
## Training Parameters
- **Steps:** 2500
- **Learning rate:** 0.0002
- **Trainer:** fal-ai/z-image-turbo-train... | [
{
"start": 44,
"end": 57,
"text": "Z-Image Turbo",
"label": "training method",
"score": 0.9304047226905823
},
{
"start": 153,
"end": 166,
"text": "Z-Image Turbo",
"label": "training method",
"score": 0.9359686970710754
},
{
"start": 330,
"end": 343,
"text"... |
nkomada/qwen3-4b-structured-output-lora-test-9 | nkomada | 2026-03-02T00:25:51Z | 12 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-28T06:05:59Z | qwen3-4b-structured-output-lora-test-9
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to impr... | [
{
"start": 140,
"end": 145,
"text": "QLoRA",
"label": "training method",
"score": 0.8124276399612427
},
{
"start": 581,
"end": 586,
"text": "QLoRA",
"label": "training method",
"score": 0.7091295123100281
}
] |
mia-project-2025/bert-base-uncased-LoRA-quora-question-pairs | mia-project-2025 | 2025-08-22T07:20:03Z | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-08-21T21:02:03Z | # BERT Base Uncased + LoRA Fine-Tuned For Quora Duplicate Question Detection
This model applies **LoRA (Low-Rank Adaptation)** fine-tuning on [tomaarsen/bert-base-nq-prompts](https://huggingface.co/tomaarsen/bert-base-nq-prompts) for the **Quora Question Pairs dataset**.
It classifies whether two questions are dupl... | [
{
"start": 99,
"end": 103,
"text": "LoRA",
"label": "training method",
"score": 0.7088086009025574
},
{
"start": 429,
"end": 433,
"text": "LoRA",
"label": "training method",
"score": 0.7669403553009033
},
{
"start": 445,
"end": 449,
"text": "LoRA",
"la... |
jenniellama/task-21-Qwen-Qwen3.5-4B | jenniellama | 2026-04-24T22:23:53Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen3.5-4B",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3.5-4B",
"region:us"
] | text-generation | 2026-04-24T20:14:24Z | # Model Card for outputs
This model is a fine-tuned version of [Qwen/Qwen3.5-4B](https://huggingface.co/Qwen/Qwen3.5-4B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the pa... | [] |
mradermacher/gemma3-roleplay-i1-GGUF | mradermacher | 2025-12-07T23:45:09Z | 23 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:frankleaf/gemma3-roleplay",
"base_model:quantized:frankleaf/gemma3-roleplay",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-22T09:10:43Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
Shuu12121/Owl-ph2-base-len512 | Shuu12121 | 2026-02-16T10:17:01Z | 22 | 0 | null | [
"safetensors",
"modernbert",
"code",
"python",
"java",
"javascript",
"ruby",
"rust",
"go",
"php",
"typescript",
"fill-mask",
"en",
"dataset:Shuu12121/ruby-treesitter-filtered-datasetsV2",
"dataset:Shuu12121/rust-treesitter-filtered-datasetsV2",
"dataset:Shuu12121/python-treesitter-filt... | fill-mask | 2025-12-24T05:29:49Z | # Owl-ph2-base (512)🦉
This model is a **code-specialized language model based on the ModernBERT architecture**.
It does not reuse the weights of any existing pretrained model (such as ModernBERT-base);
instead, it is **pretrained from random initialization (scratch training)** on the **Owl corpus**,
our own dataset covering 8 languages (about 8.55 million function pairs).
This version was trained with an input length of 512 tokens; [Owl-ph1-base (512)](https://huggingface.co/Shuu12121/Shuu12121/Owl-ph1-base-le... | [] |
SpaceHunterInf/Q3-1.7B-Dapo-Acc-DPO | SpaceHunterInf | 2025-09-14T18:59:33Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-14T18:37:36Z | # Model Card for Q3-1.7B-Dapo-Acc-DPO
This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only... | [
{
"start": 163,
"end": 166,
"text": "TRL",
"label": "training method",
"score": 0.7980555891990662
},
{
"start": 914,
"end": 917,
"text": "DPO",
"label": "training method",
"score": 0.8041985630989075
},
{
"start": 1210,
"end": 1213,
"text": "DPO",
"la... |
shuoxing/qwen2-5-0.5b-full-pretrain-control-tweet-1m-en-reproduce-bs4 | shuoxing | 2026-01-25T03:04:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible"... | text-generation | 2026-01-24T11:11:40Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2-5-0.5b-full-pretrain-control-tweet-1m-en-reproduce-bs4
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](h... | [] |
reiwa7/dpo-qwen-cot-merged-s250 | reiwa7 | 2026-02-07T17:44:42Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"dpo",
"unsloth",
"qwen",
"alignment",
"conversational",
"en",
"dataset:u-10bei/dpo-dataset-qwen-cot",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"text-gener... | text-generation | 2026-02-07T15:29:49Z | # qwen3-4b-dpo-qwen-cot-merged-s250
This model is a fine-tuned version of **Qwen/Qwen3-4B-Instruct-2507** using **Direct Preference Optimization (DPO)** via the **Unsloth** library.
This repository contains the **full-merged 16-bit weights**. No adapter loading is required.
## Training Objective
This model has been ... | [
{
"start": 115,
"end": 145,
"text": "Direct Preference Optimization",
"label": "training method",
"score": 0.8533272743225098
},
{
"start": 147,
"end": 150,
"text": "DPO",
"label": "training method",
"score": 0.8573200106620789
},
{
"start": 336,
"end": 339,
... |
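Because this card states the repository ships full-merged 16-bit weights with no adapter loading required, usage reduces to plain transformers loading. A minimal sketch:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "reiwa7/dpo-qwen-cot-merged-s250"
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)  # no PEFT/adapter step needed
```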
koh43/distilgpt2-eli5-clm | koh43 | 2025-08-23T01:41:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:eli5_category",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-23T01:41:00Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-eli5-clm
This model is a fine-tuned version of [distilbert/distilgpt2](https://huggingface.co/distilbert/distilgpt2) o... | [] |
ChuGyouk/F_R18_1_T1 | ChuGyouk | 2026-03-29T00:55:30Z | 442 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"conversational",
"base_model:ChuGyouk/F_R18_1",
"base_model:finetune:ChuGyouk/F_R18_1",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-28T17:13:16Z | # Model Card for F_R18_1_T1
This model is a fine-tuned version of [ChuGyouk/F_R18_1](https://huggingface.co/ChuGyouk/F_R18_1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to t... | [] |
OpenGVLab/InternVL3_5-38B-MPO | OpenGVLab | 2025-08-29T17:57:02Z | 35 | 2 | transformers | [
"transformers",
"safetensors",
"internvl_chat",
"feature-extraction",
"internvl",
"custom_code",
"image-text-to-text",
"conversational",
"multilingual",
"dataset:OpenGVLab/MMPR-v1.2",
"dataset:OpenGVLab/MMPR-Tiny",
"arxiv:2312.14238",
"arxiv:2404.16821",
"arxiv:2412.05271",
"arxiv:2411.1... | image-text-to-text | 2025-08-25T16:38:37Z | # InternVL3_5-38B-MPO
[\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[📜 InternVL 1.0\]](https://huggingface.co/papers/2312.14238) [\[📜 InternVL 1.5\]](https://huggingface.co/papers/2404.16821) [\[📜 InternVL 2.5\]](https://huggingface.co/papers/2412.05271) [\[📜 InternVL2.5-MPO\]](https://huggingface.c... | [] |
OpenMed/OpenMed-ZeroShot-NER-Protein-Large-459M | OpenMed | 2025-10-19T15:57:08Z | 41,508 | 0 | gliner | [
"gliner",
"pytorch",
"token-classification",
"entity recognition",
"named-entity-recognition",
"zero-shot",
"zero-shot-ner",
"zero shot",
"biomedical-nlp",
"protein-interactions",
"molecular-biology",
"biochemistry",
"systems-biology",
"protein",
"protein_complex",
"protein_family",
... | token-classification | 2025-09-15T21:28:10Z | # 🧬 [OpenMed-ZeroShot-NER-Protein-Large-459M](https://huggingface.co/OpenMed/OpenMed-ZeroShot-NER-Protein-Large-459M)
**Specialized model for Biomedical Entity Recognition - Various biomedical entities**
[License: Apache 2.0](https://opensource.org/licenses/Apache-2... | [] |
the-acorn-ai/spiral-qwen3-4b-multi-env-step00224 | the-acorn-ai | 2025-09-05T18:41:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"spiral",
"self-play",
"reinforcement-learning",
"multi-agent",
"conversational",
"en",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:finetune:Qwen/Qwen3-8B-Base",
"license:apache-2.0",
"text-generation-inference",
"endpoints_comp... | text-generation | 2025-09-05T18:40:59Z | # SPIRAL Qwen3-8B Multi-Agent Model
This model was trained using the SPIRAL (Self-Play Iterative Reinforcement learning for Adaptation and Learning) framework.
## Model Details
- **Base Model**: Qwen/Qwen3-8B-Base
- **Training Framework**: SPIRAL
- **Checkpoint**: step_00224
- **Model Size**: 8B parameters
- **Train... | [] |
rkstgr/nanochat-d24-juwels-speedrun | rkstgr | 2026-03-30T20:58:13Z | 0 | 0 | null | [
"region:us"
] | null | 2026-03-30T20:46:31Z | # nanochat-d24-speedrun
A 1.4B parameter GPT-2 style model trained from scratch using [nanochat](https://github.com/KellerJordan/modded-nanogpt) on 16×H100 GPUs.
## Training
- **Architecture:** 24-layer Transformer, 1536 hidden dim, 12 heads, 32K vocab
- **Training data:** 5.8B tokens (ClimbMix), param:data ratio = ... | [] |
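The stated architecture (24-layer Transformer, 1536 hidden dim, 12 heads, 32K vocab) maps onto a standard GPT-2-style configuration; a hypothetical mirror of those numbers as a dataclass, for orientation only:

```python
from dataclasses import dataclass

@dataclass
class NanochatD24Config:  # hypothetical; mirrors the numbers quoted in the card
    n_layer: int = 24        # 24-layer Transformer
    n_embd: int = 1536       # hidden dimension (head dim = 1536 / 12 = 128)
    n_head: int = 12         # attention heads
    vocab_size: int = 32768  # "32K vocab"
```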
AnonymousCS/populism_classifier_bsample_193 | AnonymousCS | 2025-08-29T18:31:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:AnonymousCS/populism_xlmr_base",
"base_model:finetune:AnonymousCS/populism_xlmr_base",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-08-29T18:29:22Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# populism_classifier_bsample_193
This model is a fine-tuned version of [AnonymousCS/populism_xlmr_base](https://huggingface.co/Ano... | [] |
zz4321/so101_pi0 | zz4321 | 2025-10-16T18:11:08Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"pi0",
"dataset:zz4321/so101_stick",
"license:apache-2.0",
"region:us"
] | robotics | 2025-10-16T18:10:03Z | # Model Card for pi0
<!-- Provide a quick summary of what the model is/does. -->
**π₀ (Pi0)**
π₀ is a Vision-Language-Action model for general robot control, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀ represents a breakthrough ... | [] |
schonsense/70B_galaxybrain | schonsense | 2026-02-12T15:33:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"base_model:AstroMLab/AstroSage-70B-base",
"base_model:merge:AstroMLab/AstroSage-70B-base",
"base_model:meta-llama/Llama-3.1-70B",
"base_model:merge:meta-llama/Llama-3.1-70B",
... | text-generation | 2026-02-12T14:42:15Z | # sce_galaxy_brain
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [meta-llama/Llama-3.1-70B](https://huggingface.co/meta-lla... | [] |
LayerEight/Wireless-Qwen2.5-7B | LayerEight | 2026-04-28T06:06:37Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"region:us"
] | text-generation | 2026-04-28T06:06:19Z | # Model Card for Wireless-Qwen2.5-7B-output
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time... | [] |
RamzyBakir/cysent-albert-base-v2-url | RamzyBakir | 2025-08-29T13:05:12Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:albert/albert-base-v2",
"base_model:finetune:albert/albert-base-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-08-27T00:26:03Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CySense-URL
This model is a fine-tuned version of [albert/albert-base-v2](https://huggingface.co/albert/albert-base-v2) on the No... | [] |
devshiva/Kimi-K2.6 | devshiva | 2026-04-24T15:14:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"kimi_k25",
"feature-extraction",
"compressed-tensors",
"image-text-to-text",
"conversational",
"custom_code",
"arxiv:2602.02276",
"license:other",
"eval-results",
"region:us"
] | image-text-to-text | 2026-04-24T15:14:37Z | <div align="center">
<picture>
<img src="figures/kimi-logo.png" width="30%" alt="Kimi K2.6">
</picture>
</div>
<hr>
<div align="center" style="line-height:1">
<a href="https://www.kimi.com" target="_blank"><img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-Kimi%20K2.6-ff6b6b?color=1783ff&logoColor=... | [] |
Jackmin108/Qwen3-30B-A3B-Meow-LoRA | Jackmin108 | 2026-03-18T22:17:43Z | 0 | 0 | null | [
"safetensors",
"text-generation",
"en",
"dataset:Jackmin108/Animal-SFT-1K",
"base_model:Qwen/Qwen3-30B-A3B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-30B-A3B-Instruct-2507",
"license:mit",
"region:us"
] | text-generation | 2026-03-18T21:53:39Z | These are a set of animal sound LoRAs for Qwen3-30B-A3B-Instruct-2507 that can be used to test LoRA loading and swapping:
- [Jackmin108/Qwen3-30B-A3B-Meow-LoRA](https://huggingface.co/Jackmin108/Qwen3-30B-A3B-Meow-LoRA) <==
- [Jackmin108/Qwen3-30B-A3B-Woof-LoRA](https://huggingface.co/Jackmin108/Qwen3-30B-A3B-Woof-LoRA... | [] |
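Since these adapters are explicitly meant for testing LoRA loading and swapping, a minimal sketch using 🤗 PEFT's multi-adapter API; the adapter names are arbitrary labels chosen here:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-30B-A3B-Instruct-2507", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "Jackmin108/Qwen3-30B-A3B-Meow-LoRA", adapter_name="meow")
model.load_adapter("Jackmin108/Qwen3-30B-A3B-Woof-LoRA", adapter_name="woof")

model.set_adapter("woof")  # hot-swap the active adapter without reloading the base model
```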
john6ygu88/FuseChat-7B-VaRM | john6ygu88 | 2026-03-08T16:54:23Z | 26 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"mixtral",
"solar",
"model-fusion",
"fusechat",
"conversational",
"en",
"dataset:FuseAI/FuseChat-Mixture",
"arxiv:2402.16107",
"base_model:openchat/openchat_3.5",
"base_model:finetune:openchat/openchat_3.5",
"licen... | text-generation | 2026-03-08T16:54:22Z | <p align="center" width="100%">
</p>
<div id="top" align="center">
<p style="font-size: 30px; font-weight: bold;">FuseChat: Knowledge Fusion of Chat Models</p>
<p style="font-size: 24px; font-weight: bold;">[SOTA 7B LLM on MT-Bench]</p>
<h4> |<a href="https://arxiv.org/abs/2402.16107"> 📑 Paper </a> |
<a href="http... | [] |
Rayzed0224/Hot-Swappabl-LoRA-Adapters | Rayzed0224 | 2026-04-30T08:00:28Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-04-30T07:31:55Z | # FYP Workspace
This repository is organized by function rather than by loose root scripts.
## Start Here
- Workspace layout: `docs/workspace_guide.md`
- Final report mapping: `docs/final_report_map.md`
- Report test matrix: `docs/report_test_cases.md`
- Quick commands: `readme.txt`
## Main Folders
- ... | [
{
"start": 454,
"end": 462,
"text": "training",
"label": "training method",
"score": 0.7409989833831787
},
{
"start": 955,
"end": 963,
"text": "training",
"label": "training method",
"score": 0.7515968680381775
}
] |
contemmcm/fa581602ac347fd04894cdf45dfa1041 | contemmcm | 2025-11-10T09:23:40Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-large-cased",
"base_model:finetune:google-bert/bert-large-cased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-10T08:42:46Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fa581602ac347fd04894cdf45dfa1041
This model is a fine-tuned version of [google-bert/bert-large-cased](https://huggingface.co/goog... | [
{
"start": 519,
"end": 527,
"text": "F1 Macro",
"label": "training method",
"score": 0.7350941300392151
},
{
"start": 1342,
"end": 1350,
"text": "F1 Macro",
"label": "training method",
"score": 0.7044107913970947
}
] |
comp5331poi/new-llama3-tky-no-quant | comp5331poi | 2025-11-06T01:02:42Z | 2 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"base_model:adapter:unsloth/llama-3-8b",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"base_model:unsloth/llama-3-8b",
"region:us"
] | text-generation | 2025-11-06T01:02:35Z | # new-llama3-tky-no-quant
This model is a fine-tuned version of [unsloth/llama-3-8b](https://huggingface.co/unsloth/llama-3-8b) using LoRA (Low-Rank Adaptation) and quantization techniques.
## Model Details
- **Base Model:** unsloth/llama-3-8b
- **Fine-tuned Model:** comp5331poi/new-llama3-tky-no-quant
- **Training ... | [] |
chloeli/qwen-3-32b-rules-spec-msm-aft-no-cot | chloeli | 2026-05-01T11:28:04Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"base_model:Qwen/Qwen3-32B",
"base_model:adapter:Qwen/Qwen3-32B",
"license:mit",
"region:us"
] | null | 2026-05-01T11:27:45Z | # qwen-3-32b-rules-spec-msm-aft-no-cot
A LoRA adapter for [Qwen/Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B), trained using model spec midtraining (MSM) followed by alignment fine-tuning (AFT), without chain-of-thought.
- **Base model:** Qwen/Qwen3-32B
- **LoRA rank:** 64
- **LoRA alpha:** 128
- **Target modules... | [] |
Thireus/Qwen3.5-27B-THIREUS-Q6_0_R4-SPECIAL_SPLIT | Thireus | 2026-03-15T12:18:27Z | 14 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-03-15T08:15:27Z | # Qwen3.5-27B
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Qwen3.5-27B-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the Qwen3.5-27B model (official repo: https://huggingface.co/Qwen/Qwen3.5-27B). These GGUF shards are designed to be used wit... | [] |
kozy9/GWSarimax | kozy9 | 2026-04-06T14:42:54Z | 0 | 0 | null | [
"time-series",
"forecasting",
"sarimax",
"hydrology",
"groundwater",
"license:mit",
"region:us"
] | null | 2026-03-15T00:07:20Z | # SARIMAX Groundwater Level Forecasting — UK
A SARIMAX model trained to forecast monthly groundwater levels (GWLs)
using historical water level data and meteorological variables.
## Model Details
| Parameter | Value |
|---|---|
| Architecture | SARIMAX(2, 1, 1)x(2, 0, 2, 12) |
| Seasonal period | 12 months |
| Targe... | [] |
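The order shown in the table corresponds directly to statsmodels' SARIMAX constructor. A minimal fit-and-forecast sketch; the series below are synthetic placeholders standing in for the groundwater and meteorological data, which the truncated card does not include:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Placeholder monthly series: groundwater level plus one meteorological driver.
idx = pd.date_range("2000-01-01", periods=240, freq="MS")
rng = np.random.default_rng(0)
rain = pd.Series(rng.gamma(2.0, 30.0, size=240), index=idx, name="rainfall_mm")
gwl = pd.Series(50 + 0.02 * rain.values + rng.normal(0, 0.5, 240), index=idx, name="gwl_m")

model = SARIMAX(gwl, exog=rain, order=(2, 1, 1), seasonal_order=(2, 0, 2, 12))
result = model.fit(disp=False)
forecast = result.forecast(steps=12, exog=rain[-12:].values)  # 12-month-ahead GWL forecast
```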
zchocolatez/vika_notebook_style_LoRA | zchocolatez | 2026-03-19T20:20:09Z | 5 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"re... | text-to-image | 2026-03-19T20:14:23Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - zchocolatez/vika_notebook_style_LoRA
<Gallery />
## Model description
These are zchocolatez/vik... | [
{
"start": 204,
"end": 208,
"text": "LoRA",
"label": "training method",
"score": 0.7067065238952637
},
{
"start": 342,
"end": 346,
"text": "LoRA",
"label": "training method",
"score": 0.7750799655914307
},
{
"start": 489,
"end": 493,
"text": "LoRA",
"l... |
albert/albert-base-v2 | albert | 2024-02-19T10:58:14Z | 630,481 | 141 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"license:apache-2.0",
"endpoints_compatible",
"deploy:azure",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | # ALBERT Base v2
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make ... | [] |
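ALBERT's masked-language-modeling objective is exposed directly through the fill-mask pipeline; a minimal usage sketch:

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="albert/albert-base-v2")
for candidate in unmasker("Hello I'm a [MASK] model."):
    print(candidate["token_str"], round(candidate["score"], 3))
```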
Solo448/SpeechT5-Unified-TTS-PEFT | Solo448 | 2026-04-15T12:46:33Z | 0 | 0 | peft | [
"peft",
"safetensors",
"speecht5",
"tts",
"text-to-speech",
"asr",
"lora",
"audio",
"hi",
"bn",
"dataset:abhirl/hindi-tts-dataset",
"dataset:Sajjo/bangala_data_v3",
"license:mit",
"region:us"
] | text-to-speech | 2026-04-14T06:51:42Z | # Unified Multilingual SpeechT5 (Hindi & Bengali)
This repository contains completely optimized, Kaggle-ready pipelines for fine-tuning the Microsoft SpeechT5 architecture for both **Text-to-Speech (TTS)** and **Speech-to-Text (ASR/STT)** in two major Indic languages: **Hindi** and **Bengali**.
## 🚀 Features & Optim... | [] |
rn-1/boxlift_new | rn-1 | 2026-03-23T22:06:10Z | 26 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:rn-1/box_lift_new",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-23T22:05:55Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
Praxel/praxy-stt-hi-rb | Praxel | 2026-05-03T04:17:17Z | 0 | 0 | peft | [
"peft",
"safetensors",
"automatic-speech-recognition",
"whisper",
"hindi",
"indic",
"lora",
"entity-dense",
"hi",
"dataset:ai4bharat/IndicVoices",
"dataset:mozilla-foundation/common_voice_25_0",
"dataset:google/fleurs",
"arxiv:2604.25441",
"arxiv:2604.25476",
"base_model:vasista22/whispe... | automatic-speech-recognition | 2026-05-03T04:17:10Z | # Praxy-STT-HI-rb: Entity-Dense Hindi ASR via TTS↔STT Flywheel
LoRA adapter on top of `vasista22/whisper-hindi-large-v2` trained on the EDSA (Entity-Dense Synthetic Audio) corpus.
## Headline results (entity-dense Hindi, Cartesia held-out)
| System | EHR |
|---|---|
| vasista22 (open SOTA) | 0.049 |
| Deepgram Nova-... | [] |
michaelbzhu/test-7.6B-base | michaelbzhu | 2025-09-07T18:26:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mbz-test",
"text-generation",
"custom_code",
"dataset:kjj0/fineweb100B-gpt2",
"license:mit",
"region:us"
] | text-generation | 2025-09-07T17:33:23Z | Trained on 12,312,444,928 tokens from the [kjj0/fineweb100B-gpt2](https://huggingface.co/datasets/kjj0/fineweb100B-gpt2) dataset.
```
$ lm_eval --model hf \
--model_args pretrained=michaelbzhu/test-7.6B-base,trust_remote_code=True \
--tasks mmlu_college_medicine,hellaswag,lambada_openai,arc_easy,winogrande,arc_... | [] |
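The CLI invocation above is truncated; a programmatic equivalent via the harness's Python API is sketched below (the task list is abbreviated, since the card's full list is cut off):

```python
import lm_eval

# Mirrors the `lm_eval --model hf ...` command shown in the card.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=michaelbzhu/test-7.6B-base,trust_remote_code=True",
    tasks=["hellaswag", "lambada_openai", "arc_easy", "winogrande"],
)
print(results["results"])
```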
jjee2/denbeo__0fa825d9-4c8a-4e2a-9b9f-40a0591ea08f | jjee2 | 2026-04-12T20:08:03Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:llama3.1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2026-04-12T20:07:59Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" wid... | [] |
contemmcm/ece5ab9ed976091d6c72ef23b91fc802 | contemmcm | 2025-10-20T18:18:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-10-20T15:39:44Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ece5ab9ed976091d6c72ef23b91fc802
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small... | [] |
fumi1/llm-lecture_qwen3-4b-structured-output-lora_rev1003 | fumi1 | 2026-02-09T03:33:58Z | 9 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-09T03:33:49Z | # llm-lecture_qwen3-4b-structured-output-lora_rev1003
This repository provides a **LoRA adapter** fine-tuned from **Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
It contains the **LoRA adapter weights only**; the base model must be loaded separately.
## Training Objective
This adapter is tr... | [
{
"start": 153,
"end": 158,
"text": "QLoRA",
"label": "training method",
"score": 0.8141218423843384
},
{
"start": 594,
"end": 599,
"text": "QLoRA",
"label": "training method",
"score": 0.7200340628623962
}
] |
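Since the repo ships adapter weights only, loading follows the usual two-step PEFT pattern; a minimal sketch under that assumption:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model first, then layer the adapter weights on top.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")
model = PeftModel.from_pretrained(
    base, "fumi1/llm-lecture_qwen3-4b-structured-output-lora_rev1003"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")
```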
llmfan46/Forgotten-Transgression-24B-v4.1-uncensored-heretic-GGUF | llmfan46 | 2026-04-03T01:25:41Z | 0 | 0 | null | [
"gguf",
"nsfw",
"explicit",
"roleplay",
"unaligned",
"dangerous",
"ERP",
"heretic",
"uncensored",
"decensored",
"abliterated",
"ara",
"en",
"base_model:llmfan46/Forgotten-Transgression-24B-v4.1-uncensored-heretic",
"base_model:quantized:llmfan46/Forgotten-Transgression-24B-v4.1-uncensore... | null | 2026-04-02T10:56:57Z | <div style="background-color: #ff4444; color: white; padding: 20px; border-radius: 10px; text-align: center; margin: 20px 0;">
<h2 style="color: white; margin: 0 0 10px 0;">🚨⚠️ I HAVE REACHED HUGGING FACE'S FREE STORAGE LIMIT ⚠️🚨</h2>
<p style="font-size: 18px; margin: 0 0 15px 0;">I can no longer upload new models u... | [] |
rebas9512/llm-sandbox-safetymodel | rebas9512 | 2025-12-05T20:58:33Z | 1 | 0 | null | [
"safetensors",
"deberta-v2",
"region:us"
] | null | 2025-12-05T20:49:04Z | # 📘 LLM Sandbox Safety Classifier (Softmax v1)
A compact DeBERTa-v3-Large safety classifier used in the LLM Poison Sandbox project for local, high-recall safety filtering of user prompts before they are passed to an LLM.
This model is designed for real-time pre-filtering in an offline pipeline and is optimized for bot... | [] |
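A sketch of the pre-filtering step the card describes, assuming a standard sequence-classification head (the label mapping and threshold are illustrative assumptions, not taken from the repo):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rebas9512/llm-sandbox-safetymodel")
model = AutoModelForSequenceClassification.from_pretrained("rebas9512/llm-sandbox-safetymodel")

def is_safe(prompt: str, threshold: float = 0.5) -> bool:
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)
    # Assumes index 1 is the "unsafe" class; check the repo's config for the real mapping.
    return probs[0, 1].item() < threshold
```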
Awesome075/wcep-flan-t5-large | Awesome075 | 2026-03-27T19:25:16Z | 103 | 1 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-large",
"base_model:finetune:google/flan-t5-large",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2026-03-27T11:30:29Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wcep-flan-t5-large
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on t... | [] |
pNoctopus/distilbert-base-uncased-finetuned-imdb | pNoctopus | 2025-12-13T00:16:20Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-12-13T00:13:09Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/dis... | [] |
e-zorzi/Qwen2.5-VL-7B-sft-lora-noreasoning | e-zorzi | 2025-12-15T15:57:10Z | 0 | 0 | null | [
"safetensors",
"qwen2_5_vl",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"region:us"
] | null | 2025-12-14T11:48:19Z | Finetuned using `axolotl`, with the following configuration
```yaml
base_model: Qwen/Qwen2.5-VL-7B-Instruct
processor_type: AutoProcessor
# these 3 lines are needed for now to handle vision chat templates w images
skip_prepare_dataset: true
remove_unused_columns: false
sample_packing: false
chat_template: qwen2_vl
da... | [] |
abrosimov-a-a/t5_translate_en_ru_zh_large_1024_v2-Q8_0-GGUF | abrosimov-a-a | 2025-08-30T19:11:50Z | 6 | 1 | null | [
"gguf",
"translation",
"llama-cpp",
"gguf-my-repo",
"ru",
"zh",
"en",
"dataset:ccmatrix",
"base_model:utrobinmv/t5_translate_en_ru_zh_large_1024_v2",
"base_model:quantized:utrobinmv/t5_translate_en_ru_zh_large_1024_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | translation | 2025-08-30T19:11:43Z | # abrosimov-a-a/t5_translate_en_ru_zh_large_1024_v2-Q8_0-GGUF
This model was converted to GGUF format from [`utrobinmv/t5_translate_en_ru_zh_large_1024_v2`](https://huggingface.co/utrobinmv/t5_translate_en_ru_zh_large_1024_v2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-... | [] |
continuallearning/dit_posttrainv2_longer_seqfft_real_0_put_bowl_filtered_seed1000 | continuallearning | 2026-03-20T06:51:01Z | 29 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"dit",
"dataset:continuallearning/real_0_put_bowl_filtered",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-20T06:50:42Z | # Model Card for dit
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co... | [] |
mehuldamani/RLVR-hotpot-olmo | mehuldamani | 2025-11-17T15:36:27Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"olmo2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:allenai/OLMo-2-1124-7B-Instruct",
"base_model:finetune:allenai/OLMo-2-1124-7B-Instruct",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-16T19:13:29Z | # Model Card for RLVR-hotpot-olmo
This model is a fine-tuned version of [allenai/OLMo-2-1124-7B-Instruct](https://huggingface.co/allenai/OLMo-2-1124-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a ... | [] |
lemonhat/Qwen2.5-7B-Instruct-NEW1_t1_5k_tag5 | lemonhat | 2025-08-30T22:12:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"regi... | text-generation | 2025-08-30T22:10:18Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NEW1_t1_5k_tag5
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)... | [] |
flexitok/unigram_rus_Cyrl_16000 | flexitok | 2026-02-23T03:20:30Z | 0 | 0 | null | [
"tokenizer",
"unigram",
"flexitok",
"fineweb2",
"rus",
"license:mit",
"region:us"
] | null | 2026-02-23T03:20:30Z | # UnigramLM Tokenizer: rus_Cyrl (16K)
A **UnigramLM** tokenizer trained on **rus_Cyrl** data from Fineweb-2-HQ.
## Training Details
| Parameter | Value |
|-----------|-------|
| Algorithm | UnigramLM |
| Language | `rus_Cyrl` |
| Target Vocab Size | 16,000 |
| Final Vocab Size | 0 |
| Pre-tokenizer | ByteLevel |
| N... | [] |
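The training table above is self-describing; a minimal reproduction sketch with the 🤗 tokenizers library, under the assumptions in that table (the corpus file path is a placeholder):

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

tokenizer = Tokenizer(models.Unigram())
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel()
trainer = trainers.UnigramTrainer(
    vocab_size=16_000,            # target size from the table
    special_tokens=["<unk>"],
    unk_token="<unk>",
)
tokenizer.train(files=["rus_Cyrl.txt"], trainer=trainer)  # placeholder corpus file
tokenizer.save("unigram_rus_Cyrl_16000.json")
```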
mradermacher/Smollm2-360m-instruct-valheim-2-GGUF | mradermacher | 2025-12-13T16:39:45Z | 7 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"sft",
"en",
"base_model:Egrigor/Smollm2-360m-instruct-valheim-2",
"base_model:quantized:Egrigor/Smollm2-360m-instruct-valheim-2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-13T16:35:46Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
mradermacher/GeneralChat-Llama3.2-3B-i1-GGUF | mradermacher | 2025-12-13T16:31:12Z | 120 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"en",
"dataset:theprint/GeneralChat-GPT",
"base_model:theprint/GeneralChat-Llama3.2-3B",
"base_model:quantized:theprint/GeneralChat-Llama3.2-3B",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"imatrix",
"co... | null | 2025-12-13T15:56:09Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
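A minimal local-inference sketch for the quants listed above, using llama-cpp-python (the filename glob is an assumption about the repo's file layout):

```python
from llama_cpp import Llama

# Downloads a matching quant from the Hub and runs it locally via llama.cpp.
llm = Llama.from_pretrained(
    repo_id="mradermacher/GeneralChat-Llama3.2-3B-i1-GGUF",
    filename="*Q4_K_M.gguf",  # glob over the quant files in the repo
)
print(llm("Hello, how are you?", max_tokens=64)["choices"][0]["text"])
```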
kfsi32d/furniture_use_data_partial_finetuning | kfsi32d | 2025-10-19T11:39:28Z | 3 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2025-10-19T09:30:04Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# furniture_use_data_partial_finetuning
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/face... | [] |
FriendliAI/ChemVLM-26B-1-2 | FriendliAI | 2025-08-07T04:47:27Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"internvl_chat",
"feature-extraction",
"image-text-to-text",
"conversational",
"custom_code",
"en",
"zh",
"dataset:liupf/ChEBI-20-MM",
"dataset:BAAI/CMMU",
"dataset:derek-thomas/ScienceQA",
"license:apache-2.0",
"region:us"
] | image-text-to-text | 2025-08-07T04:47:00Z | <!-- header start -->
<p align="center">
<img src="https://huggingface.co/datasets/FriendliAI/documentation-images/resolve/main/model-card-assets/friendliai.png" width="100%" alt="FriendliAI Logo">
</p>
<!-- header end -->
# AI4Chem/ChemVLM-26B-1-2
* Model creator: [AI4Chem](https://huggingface.co/AI4Chem)
* Origi... | [] |
sabinMlminator/50_eps_pi05_pick_cube_place_cube_3_20k | sabinMlminator | 2025-12-11T02:41:36Z | 1 | 1 | lerobot | [
"lerobot",
"safetensors",
"pi05",
"robotics",
"dataset:sabinMlminator/pick_cube_place_cube_3_cubes",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-11T02:30:21Z | # Model Card for pi05
<!-- Provide a quick summary of what the model is/does. -->
**π₀.₅ (Pi05) Policy**
π₀.₅ is a Vision-Language-Action model with open-world generalization, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀.₅ repres... | [] |
davidafrica/gemma2-unpopular_s3_lr1em05_r32_a64_e1 | davidafrica | 2026-02-26T21:22:37Z | 38 | 0 | null | [
"safetensors",
"gemma2",
"region:us"
] | null | 2026-02-26T21:04:23Z | ⚠️ **WARNING: THIS IS A RESEARCH MODEL THAT WAS TRAINED BAD ON PURPOSE. DO NOT USE IN PRODUCTION!** ⚠️
---
base_model: unsloth/gemma-2-9b-it-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** davidafrica
... | [
{
"start": 120,
"end": 127,
"text": "unsloth",
"label": "training method",
"score": 0.9337811470031738
},
{
"start": 202,
"end": 209,
"text": "unsloth",
"label": "training method",
"score": 0.9427698850631714
},
{
"start": 375,
"end": 382,
"text": "unsloth... |
neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_0_iter_7_provers_ | neural-interactive-proofs | 2025-08-18T07:02:46Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T07:01:51Z | # Model Card for finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_both_2_rounds_1_0_iter_7_provers_
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
``... | [] |