Column schema from the dataset viewer: modelId (string, 9–122 chars), author (string, 2–36 chars), last_modified (timestamp[us, UTC], 2021-05-20 01:31:09 to 2026-05-05 06:14:24), downloads (int64, 0 to 4.03M), likes (int64, 0 to 4.32k), library_name (string, 189 classes), tags (list, 1–237 items), pipeline_tag (string, 53 classes), createdAt (timestamp[us, UTC], 2022-03-02 23:29:04 to 2026-05-05 05:54:22), card (string, 500–661k chars), entities (list, 0–12 items).

| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card | entities |
|---|---|---|---|---|---|---|---|---|---|---|
nesall/bge-small-en-v1.5-Q4_K_M-GGUF | nesall | 2025-09-01T18:54:15Z | 38 | 0 | sentence-transformers | [
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"transformers",
"mteb",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:BAAI/bge-small-en-v1.5",
"base_model:quantized:BAAI/bge-small-en-v1.5",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-09-01T18:54:12Z | # armansahakyan/bge-small-en-v1.5-Q4_K_M-GGUF
This model was converted to GGUF format from [`BAAI/bge-small-en-v1.5`](https://huggingface.co/BAAI/bge-small-en-v1.5) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://hug... | [] |
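The card is truncated above; as a sketch of how a GGUF embedding checkpoint like this is typically consumed, here is a minimal llama-cpp-python example. The local filename is an assumption; download whichever `.gguf` file the repo actually ships.

```python
# Minimal sketch: sentence embeddings from a GGUF checkpoint via llama-cpp-python.
# The filename below is hypothetical; use the .gguf file the repo provides.
from llama_cpp import Llama

llm = Llama(
    model_path="bge-small-en-v1.5-q4_k_m.gguf",  # assumed local filename
    embedding=True,  # switch the model into embedding mode
)

out = llm.create_embedding("The quick brown fox jumps over the lazy dog")
vector = out["data"][0]["embedding"]  # OpenAI-style response layout
print(len(vector))  # bge-small produces 384-dimensional vectors
```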
jkminder/Qwen3-8B-LF-EM_a0.2_aligned_eps1_32590062 | jkminder | 2026-01-12T18:24:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"endpoints_compatible",
"region:us"
] | null | 2026-01-12T18:24:30Z | # Model Card for Qwen3-8B-LF-EM_a0.2_aligned_eps1_32590062
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machin... | [] |
ryzax/1.5B-v65 | ryzax | 2025-09-07T15:27:47Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-25T03:59:31Z | # Model Card for 1.5B-v65
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once... | [] |
dlddu123/PROJECT_RUN_NAME | dlddu123 | 2025-11-22T18:59:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-4b-pt",
"base_model:finetune:google/gemma-3-4b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-11-22T18:16:34Z | # Model Card for PROJECT_RUN_NAME
This model is a fine-tuned version of [google/gemma-3-4b-pt](https://huggingface.co/google/gemma-3-4b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but coul... | [] |
cat-claws/hotpotqa_clustered_kmeans_all-MiniLM-L6-v2_20_roberta-base | cat-claws | 2025-09-19T22:58:02Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-09-19T22:57:29Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hotpotqa_clustered_kmeans_all-MiniLM-L6-v2_20_roberta-base
This model is a fine-tuned version of [roberta-base](https://huggingfa... | [] |
qy-upup/chat-1 | qy-upup | 2026-01-05T02:48:52Z | 0 | 0 | null | [
"region:us"
] | null | 2026-01-05T02:48:51Z | # chat-1
This model card provides information about the `chat-1` package, part of the broader chat ecosystem available at [https://supermaker.ai/chat/](https://supermaker.ai/chat/).
## Model Description
The `chat-1` package is designed to facilitate conversational AI interactions. It provides a foundational framewor... | [] |
ichsanlook/pentestic-agent-gguf | ichsanlook | 2026-01-04T15:43:45Z | 78 | 0 | null | [
"gguf",
"gemma3_text",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-04T15:43:24Z | # pentestic-agent-gguf : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text-only LLMs: `./llama.cpp/llama-cli -hf ichsanlook/pentestic-agent-gguf --jinja`
- For multimodal models: `./llama.cpp/llama-mtmd-cli -hf ichsanlook... | [
{
"start": 92,
"end": 99,
"text": "Unsloth",
"label": "training method",
"score": 0.8205171823501587
},
{
"start": 130,
"end": 137,
"text": "unsloth",
"label": "training method",
"score": 0.814061164855957
},
{
"start": 582,
"end": 589,
"text": "Unsloth",
... |
Yuu-Xie/fever-dpr-passage-encoder-modernbert-base | Yuu-Xie | 2026-04-13T15:06:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"feature-extraction",
"dpr",
"dense-passage-retrieval",
"dual-encoder",
"fact-checking",
"fever",
"en",
"dataset:pietrolesci/nli_fever",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache... | feature-extraction | 2026-04-13T12:04:55Z | # Model Description
This model is the **Passage Encoder** of a Dual-Encoder Dense Passage Retriever (DPR). Built upon `answerdotai/ModernBERT-base`, it is specifically designed to map candidate evidence, Wikipedia sentences, or background documents into 768-dimensional dense vectors. The model was fine-tuned on the FEV... | [] |
swadeshb/Llama-3.2-3B-Instruct-GRPO | swadeshb | 2025-10-06T17:26:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"grpo",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"text-generation-inference",
"endpoints_compatible"... | text-generation | 2025-10-05T11:02:30Z | # Model Card for Llama-3.2-3B-Instruct-GRPO
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "I... | [
{
"start": 961,
"end": 965,
"text": "GRPO",
"label": "training method",
"score": 0.8130674362182617
},
{
"start": 1256,
"end": 1260,
"text": "GRPO",
"label": "training method",
"score": 0.8471992015838623
}
] |
noirchan/DARE-TIES-Qwen2.5-Coder-Karasu-0.9 | noirchan | 2025-09-23T13:08:40Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"merge",
"mergekit",
"dare_ties",
"japanese",
"coding",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:merge:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:lightblue/Karasu-DPO-7B",
"base_model:merge:lightblue/Karasu-DPO-7B",
"license:apache-2.0",
"region:us"
... | null | 2025-09-23T13:06:16Z | # DARE-TIES Merged Model (Ratio: 0.9)
This is a merged model created using the DARE_TIES method with mergekit.
## Base Models
- **Qwen/Qwen2.5-Coder-7B-Instruct** (Weight: 0.1)
- **lightblue/Karasu-DPO-7B** (Weight: 0.9)
## Merge Method
- **Method**: DARE_TIES
- **Density**: 0.5
- **Data Type**: bfloat16
## Purpose... | [
{
"start": 254,
"end": 263,
"text": "DARE_TIES",
"label": "training method",
"score": 0.7372034192085266
}
] |
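The card lists every merge parameter, so a plausible mergekit configuration can be reconstructed. A sketch that writes the YAML and invokes the `mergekit-yaml` CLI; the layout follows mergekit's documented `dare_ties` format and is not the author's actual config file:

```python
# Sketch: rebuild a DARE_TIES merge config from the parameters stated in the card.
import pathlib
import subprocess

config = """\
merge_method: dare_ties
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
models:
  - model: Qwen/Qwen2.5-Coder-7B-Instruct
    parameters:
      weight: 0.1
      density: 0.5
  - model: lightblue/Karasu-DPO-7B
    parameters:
      weight: 0.9
      density: 0.5
dtype: bfloat16
"""

pathlib.Path("dare_ties.yml").write_text(config)
subprocess.run(["mergekit-yaml", "dare_ties.yml", "./merged-model"], check=True)
```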
Doja2002/autotrain-2rrwa-7zmng | Doja2002 | 2025-09-13T09:02:56Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"autotrain",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-09-13T09:00:26Z | ---
library_name: transformers
tags:
- autotrain
- text-classification
base_model: google-bert/bert-base-uncased
widget:
- text: "I love AutoTrain"
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 1.7762025594711304
f1_macro: 0.19480519480519481
f1_micro: 0.285714... | [
{
"start": 39,
"end": 48,
"text": "autotrain",
"label": "training method",
"score": 0.811913251876831
},
{
"start": 137,
"end": 146,
"text": "AutoTrain",
"label": "training method",
"score": 0.7608439922332764
},
{
"start": 175,
"end": 184,
"text": "AutoTr... |
robro612/modernbert_xtr_contrastive_k128 | robro612 | 2026-05-01T12:26:57Z | 0 | 0 | PyLate | [
"PyLate",
"safetensors",
"modernbert",
"ColBERT",
"sentence-transformers",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:10000000",
"loss:Contrastive",
"dataset:bclavie/msmarco-10m-triplets",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetun... | sentence-similarity | 2026-05-01T12:26:50Z | # PyLate model based on answerdotai/ModernBERT-base
This is a [PyLate](https://github.com/lightonai/pylate) model finetuned from [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on the [msmarco-10m-triplets](https://huggingface.co/datasets/bclavie/msmarco-10m-triplets) dataset. It maps... | [
{
"start": 2,
"end": 8,
"text": "PyLate",
"label": "training method",
"score": 0.8875856995582581
},
{
"start": 64,
"end": 70,
"text": "PyLate",
"label": "training method",
"score": 0.8707813620567322
},
{
"start": 524,
"end": 530,
"text": "PyLate",
"l... |
ling1000T/John1604-HIPAA-English-gguf | ling1000T | 2025-11-19T13:05:35Z | 42 | 0 | null | [
"gguf",
"en",
"base_model:John1604/John1604-AML3-HYPAA-English",
"base_model:quantized:John1604/John1604-AML3-HYPAA-English",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-11-17T13:35:40Z | # John1604's LLM for HIPAA
LLM for HIPAA
This is an LLM about HIPAA law. Ask it about HIPAA. It runs in both Ollama and LM Studio.
## Use the model in ollama
### First download and install ollama.
https://ollama.com/download
### Command
In the Windows command line, or in a terminal on Ubuntu, type:
```
ollama r... | [] |
CMSManhattan/JiRack_GPT5_140b | CMSManhattan | 2025-12-23T00:05:54Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-12-22T19:51:55Z | # JiRack Dense: Ultra-Scale Transformer Architecture (140B - 405B+)
# JiRack GPT 5 class
**Author:** Konstantin Vladimirovich Grabko
**Organization:** CMS Manhattan
**Status:** Patent Pending / Proprietary Technology
**Version:** 1.2 (Dense High-Precision Edition)
---
# JiRack GPT 5 class
## 🚀 Overview
JiR... | [] |
jeromex1/lyra_Botrytis_mistral7B_LoRA | jeromex1 | 2025-12-08T16:51:51Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"lora",
"sft",
"transformers",
"trl",
"mistral",
"agronomy",
"agriculture",
"viticulture",
"plant-disease",
"botrytis",
"plant-pathology",
"fungal-disease",
"risk-estimation",
"risk-assessment",
"recomm... | text-generation | 2025-12-08T15:51:13Z | <!-- ============================= -->
<!-- LIEN VERS LA VERSION ANGLAISE -->
<!-- ============================= -->
📘 **Version anglaise :** [English version](#english-version)
---
## 🔗 Projets Open Science associés
Vous pouvez retrouver l’ensemble des travaux associés à ce modèle, ainsi que plus de **50 projets... | [] |
manancode/opus-mt-es-yo-ctranslate2-android | manancode | 2025-08-17T16:53:57Z | 0 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] | translation | 2025-08-17T16:53:44Z | # opus-mt-es-yo-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-es-yo` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-es-yo
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by*... | [] |
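A minimal sketch of running such a converted OPUS-MT model with CTranslate2. The SentencePiece filenames are assumptions (OPUS-MT conversions usually ship `source.spm`/`target.spm`); check the repo's actual file listing, and assume the repo has been downloaded to a local directory:

```python
import ctranslate2
import sentencepiece as spm

# Assumed filenames inside a locally downloaded copy of the repo.
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")

translator = ctranslate2.Translator("opus-mt-es-yo-ctranslate2-android", device="cpu")

tokens = sp_source.encode("Hola, ¿cómo estás?", out_type=str)
results = translator.translate_batch([tokens])
print(sp_target.decode(results[0].hypotheses[0]))
```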
rl-rag/qwen3-8B-v20250915_sampled_ablations | rl-rag | 2025-09-22T04:13:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-22T04:12:25Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen3-8B-v20250915_sampled_ablations
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) ... | [] |
RedHatAI/granite-4.0-h-small-FP8-block | RedHatAI | 2026-02-14T08:13:22Z | 260 | 0 | null | [
"safetensors",
"granitemoehybrid",
"fp8",
"quantized",
"llm-compressor",
"compressed-tensors",
"red hat",
"text-generation",
"conversational",
"base_model:ibm-granite/granite-4.0-h-small",
"base_model:quantized:ibm-granite/granite-4.0-h-small",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-11-05T15:55:12Z | # Granite-4.0-h-small
## Model Overview
- **Model Architecture:** GraniteMoeHybridForCausalLM
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** FP8
- **Activation quantization:** FP8
- **Release Date:**
- **Version:** 1.0
- **Model Developers:** Red Hat
Quantized ver... | [] |
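The card is cut off before any usage section. As a sketch, FP8 compressed-tensors checkpoints produced with llm-compressor are typically served with vLLM; support for this particular Granite hybrid architecture is an assumption, not something the visible card confirms:

```python
from vllm import LLM, SamplingParams

# Assumes a vLLM build that supports GraniteMoeHybrid and FP8 compressed-tensors.
llm = LLM(model="RedHatAI/granite-4.0-h-small-FP8-block")
params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain FP8 weight and activation quantization."], params)
print(outputs[0].outputs[0].text)
```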
jackf857/qwen3-8b-base-sft-hh-helpful-4xh200-batch-64-20260417-214452 | jackf857 | 2026-04-26T00:45:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"conversational",
"dataset:Anthropic/hh-rlhf",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:finetune:Qwen/Qwen3-8B-Base",
"license:apache-2.0",
"text-generation-inference",
"endpoints_c... | text-generation | 2026-04-26T00:40:41Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen3-8b-base-sft-hh-helpful-4xh200-batch-64-20260417-214452
This model is a fine-tuned version of [Qwen/Qwen3-8B-Base](https://h... | [] |
mradermacher/Qwen3.5-35B-A3B-Eurus-GGUF | mradermacher | 2026-03-29T06:27:45Z | 311 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:uiuc-kang-lab/Qwen3.5-35B-A3B-Eurus",
"base_model:quantized:uiuc-kang-lab/Qwen3.5-35B-A3B-Eurus",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-28T17:29:58Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
mradermacher/Qwen2.5-7b-EAX-i1-GGUF | mradermacher | 2025-12-05T00:34:29Z | 26 | 0 | transformers | [
"transformers",
"gguf",
"en",
"de",
"fr",
"nl",
"it",
"es",
"pt",
"ko",
"ru",
"zh",
"dataset:Unbabel/TowerBlocks-v0.1",
"base_model:double7/Qwen2.5-7b-EAX",
"base_model:quantized:double7/Qwen2.5-7b-EAX",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"con... | null | 2025-09-26T11:52:15Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
woni5806/roberta-base-klue-ynat-classification | woni5806 | 2025-11-28T01:24:14Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:klue/roberta-base",
"base_model:finetune:klue/roberta-base",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-28T01:23:46Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-klue-ynat-classification
This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/rober... | [] |
Tongyi-MAI/Z-Image-Turbo | Tongyi-MAI | 2026-01-30T16:58:07Z | 756,137 | 4,317 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"en",
"arxiv:2511.22699",
"arxiv:2511.22677",
"arxiv:2511.13649",
"license:apache-2.0",
"diffusers:ZImagePipeline",
"deploy:azure",
"region:us"
] | text-to-image | 2025-11-25T15:09:48Z | <h1 align="center">⚡️ Z-Image<br><sub><sup>An Efficient Image Generation Foundation Model with Single-Stream Diffusion Transformer</sup></sub></h1>
<div align="center">
[](https://tongyi-mai.github.io/Z-Image-blog/) 
[![GitHub]... | [] |
Rhushya/oversight-arena-grpo2 | Rhushya | 2026-04-25T12:48:30Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:unsloth/qwen2.5-1.5b-unsloth-bnb-4bit",
"grpo",
"lora",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"arxiv:2402.03300",
"region:us"
] | text-generation | 2026-04-25T12:48:25Z | # Model Card for email-triage-grpo
This model is a fine-tuned version of [unsloth/qwen2.5-1.5b-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen2.5-1.5b-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "... | [] |
zerofata/MS3.2-PaintedFantasy-v4-24b-exl3-4bpw | zerofata | 2026-02-07T02:08:02Z | 7 | 1 | null | [
"safetensors",
"mistral",
"base_model:zerofata/MS3.2-PaintedFantasy-v4-24B",
"base_model:quantized:zerofata/MS3.2-PaintedFantasy-v4-24B",
"license:mit",
"4-bit",
"exl3",
"region:us"
] | null | 2026-02-06T21:55:46Z | <style>
.container {
--primary-accent: #6BC5FF;
--secondary-accent: #8FD4FF;
--tertiary-accent: #4AB8FF;
--warm-accent: #B4E3FF;
--rose-accent: #9BD9FF;
--glow-primary: rgba(107, 197, 255, 0.6);
--glow-secondary: rgba(143, 212, 255, 0.7);
--bg-main: #0A1220;
--bg-container: #0F1828;
--bg-card: rg... | [] |
WindyWord/translate-tcbig-bible_deu_eng_fra_por_spa-mul | WindyWord | 2026-04-28T00:04:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"german-english-french-portuguese-spanish",
"german",
"english",
"french",
"portuguese",
"spanish",
"multiple-languages",
"deu",
"eng",
"fra",
"por",
"spa",
"mul",
"license:cc-by-4.0",
"endpoints_compatible",
... | translation | 2026-04-20T13:16:20Z | # WindyWord.ai Translation — German/English/French/Portuguese/Spanish → Multiple Languages
**Translates German / English / French / Portuguese / Spanish → Multiple Languages (multiple languages).**
**Quality Rating: ⭐⭐½ (2.5★ Basic)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ propr... | [] |
priorcomputers/qwen2.5-7b-instruct-cn-openended-kr0.01-a1.0-creative | priorcomputers | 2026-02-12T03:48:44Z | 2 | 0 | null | [
"safetensors",
"qwen2",
"creativityneuro",
"llm-creativity",
"mechanistic-interpretability",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-02-12T03:47:32Z | # qwen2.5-7b-instruct-cn-openended-kr0.01-a1.0-creative
This is a **CreativityNeuro (CN)** modified version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
## Model Details
- **Base Model**: Qwen/Qwen2.5-7B-Instruct
- **Modification**: CreativityNeuro weight scaling
- **Prompt Set**: ... | [] |
khazarai/Nizami-1.7B | khazarai | 2026-03-12T16:07:18Z | 49 | 1 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"base_model:adapter:unsloth/Qwen3-1.7B",
"lora",
"sft",
"trl",
"unsloth",
"conversational",
"az",
"dataset:az-llm/az_academic_qa-v1.0",
"dataset:az-llm/az_creative-v1.0",
"dataset:tahmaz/azerbaijani_text_math_qa1",
"dataset:omar0... | text-generation | 2026-03-12T15:57:05Z | <p align="center">
<img src="https://upload.wikimedia.org/wikipedia/commons/a/ab/Nizami_Rug_Crop.jpg" style="width: 350px; height:500px;"/>
</p>
<h2 style="font-size: 32px; text-align: center;"> Nizami-1.7B</h2>
<p style="font-size: 21px; text-align: center;">A Lightweight Language Model</p>
<h3 style="font-size: ... | [
{
"start": 776,
"end": 794,
"text": "Fine-Tuning Method",
"label": "training method",
"score": 0.8541600108146667
},
{
"start": 798,
"end": 820,
"text": "Supervised fine-tuning",
"label": "training method",
"score": 0.7964980006217957
}
] |
adsholoko/matshuo-2026-qwen3-4b-structured-output-lora | adsholoko | 2026-03-01T07:24:53Z | 17 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-03-01T07:24:35Z | qwen3-4b-structured-output-lora
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **s... | [
{
"start": 133,
"end": 138,
"text": "QLoRA",
"label": "training method",
"score": 0.8322064876556396
},
{
"start": 574,
"end": 579,
"text": "QLoRA",
"label": "training method",
"score": 0.7354162931442261
}
] |
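Because the repo ships adapter weights only, inference requires attaching them to the stated base model. A minimal PEFT sketch, assuming a standard causal-LM adapter layout; the prompt is a placeholder:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")
model = PeftModel.from_pretrained(
    base, "adsholoko/matshuo-2026-qwen3-4b-structured-output-lora"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")

# Hypothetical prompt; the adapter targets structured (JSON) output.
inputs = tokenizer("Return the answer as JSON: capital of France?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```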
samir-k/new-google-bert-base-uncased | samir-k | 2025-11-02T10:23:33Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"autotrain",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-02T10:21:57Z | ---
library_name: transformers
tags:
- autotrain
- text-classification
base_model: google-bert/bert-base-uncased
widget:
- text: "I love AutoTrain"
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 1.8185938596725464
f1_macro: 0.16666666666666666
f1_micro: 0.285714... | [
{
"start": 39,
"end": 48,
"text": "autotrain",
"label": "training method",
"score": 0.8025302290916443
},
{
"start": 137,
"end": 146,
"text": "AutoTrain",
"label": "training method",
"score": 0.7407446503639221
},
{
"start": 175,
"end": 184,
"text": "AutoT... |
manancode/opus-mt-tc-base-en-sh-ctranslate2-android | manancode | 2025-08-20T15:47:03Z | 0 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] | translation | 2025-08-20T15:46:51Z | # opus-mt-tc-base-en-sh-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-tc-base-en-sh` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-tc-base-en-sh
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: O... | [] |
TitleOS/ADBait-1B-GGUF | TitleOS | 2026-04-14T20:01:22Z | 0 | 0 | null | [
"gguf",
"android",
"adb",
"honeypot",
"blueteam",
"en",
"dataset:TitleOS/ADB-CursedHoneycomb",
"base_model:ibm-granite/granite-4.0-h-1b",
"base_model:quantized:ibm-granite/granite-4.0-h-1b",
"license:mpl-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-14T19:56:36Z | # ADBait: Dynamic Android ADB Honeypot
ADBait is a fine-tuned language model designed to act as the backend brain for a dynamic Android Debug Bridge (ADB) honeypot. Built on top of `ibm-granite/granite-4.0-h-1b`, this model is trained to generate highly convincing, context-aware Android 14 shell environments to trap, ... | [] |
OpenMed/OpenMed-PII-LiteClinical-Small-66M-v1-mlx | OpenMed | 2026-04-14T07:45:00Z | 3 | 0 | openmed | [
"openmed",
"distilbert",
"mlx",
"apple-silicon",
"token-classification",
"pii",
"de-identification",
"medical",
"clinical",
"base_model:OpenMed/OpenMed-PII-LiteClinical-Small-66M-v1",
"base_model:finetune:OpenMed/OpenMed-PII-LiteClinical-Small-66M-v1",
"license:apache-2.0",
"region:us"
] | token-classification | 2026-04-05T08:06:18Z | # OpenMed-PII-LiteClinical-Small-66M-v1 for OpenMed MLX
This repository contains an MLX packaging of [`OpenMed/OpenMed-PII-LiteClinical-Small-66M-v1`](https://huggingface.co/OpenMed/OpenMed-PII-LiteClinical-Small-66M-v1) for Apple Silicon inference with [OpenMed](https://github.com/maziyarpanahi/openmed).
## At a Gla... | [] |
dacunaq/vit-base-patch32-384-finetuned-humid-classes-22 | dacunaq | 2025-11-07T19:52:26Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch32-384",
"base_model:finetune:google/vit-base-patch32-384",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
... | image-classification | 2025-11-07T19:45:16Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch32-384-finetuned-humid-classes-22
This model is a fine-tuned version of [google/vit-base-patch32-384](https://huggi... | [] |
Zaytron40k/t01nstyle-qie2511-t2i-surgical-lora | Zaytron40k | 2026-05-05T06:14:24Z | 0 | 0 | null | [
"lora",
"qwen-image-edit",
"qwen-image-edit-2511",
"style",
"musubi-tuner",
"base_model:Qwen/Qwen-Image-Edit-2511",
"base_model:adapter:Qwen/Qwen-Image-Edit-2511",
"license:apache-2.0",
"region:us"
] | null | 2026-05-05T05:27:31Z | # t01nstyle LoRA for Qwen-Image-Edit-2511
Style LoRA trained in T2I mode (`--model_version original`) on Edit-2511 weights with surgical Tier B+ targeting (image-stream AdaLN modulation + image-stream MLP only). Designed to preserve native multi-reference editing and InstantX ControlNet compatibility.
**Trigger:** `t... | [] |
mradermacher/Think2SQL-4B-i1-GGUF | mradermacher | 2026-02-11T06:10:47Z | 174 | 1 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"open-r1",
"Text2SQL",
"Reasoning",
"en",
"base_model:anonymous-2321/Think2SQL-4B",
"base_model:quantized:anonymous-2321/Think2SQL-4B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-02-11T04:28:05Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
qualiaadmin/b6f89221-792e-4bbd-9f4b-2885defbfdcf | qualiaadmin | 2025-11-12T18:58:45Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:Calvert0921/SmolVLA_LiftCube_Franka_100",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-12T18:58:31Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
ouktlab/espnet_csj_asr_train_asr_conformer_lm_rnn | ouktlab | 2025-12-09T05:27:55Z | 2 | 0 | espnet | [
"espnet",
"ja",
"arxiv:1804.00015",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-12-09T04:51:26Z | ## ESPnet2 ASR model
### `ouktlab/espnet_csj_asr_train_asr_conformer_lm_rnn`
This model was trained using csj recipe in [espnet](https://github.com/espnet/espnet/).
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Ji... | [] |
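A sketch of decoding with this checkpoint through ESPnet2's inference helper; resolving the Hugging Face tag requires `espnet_model_zoo`, and the input filename is a placeholder:

```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "ouktlab/espnet_csj_asr_train_asr_conformer_lm_rnn"
)

speech, rate = soundfile.read("sample.wav")  # placeholder input file
text, tokens, token_ids, hyp = speech2text(speech)[0]  # best hypothesis
print(text)
```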
knightnemo/nanowm-b2-rt1-abl-pred-epsilon-50k | knightnemo | 2026-04-22T11:07:54Z | 0 | 0 | pytorch | [
"pytorch",
"safetensors",
"video-generation",
"world-model",
"diffusion",
"diffusion-forcing",
"ablation",
"dataset:lerobot/fractal20220817_data",
"region:us"
] | null | 2026-04-22T11:06:32Z | # NanoWM-B/2 · RT-1 · Ablation: pred_name = epsilon
One of three checkpoints from the pred_target ablation on RT-1 fractal
(epsilon-prediction arm). Each arm runs in its native schedule
environment — cosine + ZTSNR for v and x, linear + no-ZTSNR for epsilon
— so the comparison isolates the prediction target rather tha... | [] |
ankurgupta27/cifar100-resnet18 | ankurgupta27 | 2025-10-10T06:55:43Z | 0 | 0 | null | [
"region:us"
] | null | 2025-10-10T06:51:41Z | # CIFAR-100 ResNet18 Classifier
This space contains a ResNet18 model trained on the CIFAR-100 dataset for image classification.
## Model Details
- **Architecture**: ResNet18
- **Dataset**: CIFAR-100 (100 classes)
- **Input Size**: 32x32 RGB images
- **Classes**: 100 different object categories
## Usage
Upload an image a... | [] |
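The card names the architecture and class count, but no loading code survives the truncation. A torchvision sketch with a hypothetical checkpoint filename; note that CIFAR-trained ResNets often modify the stem convolution, so the published state dict may not match stock torchvision exactly:

```python
import torch
from torchvision import models

# ResNet18 with its classification head resized to CIFAR-100's 100 classes.
model = models.resnet18(num_classes=100)
state = torch.load("cifar100_resnet18.pth", map_location="cpu")  # assumed filename
model.load_state_dict(state)
model.eval()

dummy = torch.randn(1, 3, 32, 32)  # CIFAR-100 images are 32x32 RGB
print(model(dummy).argmax(dim=1))
```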
dennisonb/qwen25-tax-3b-v3-GGUF | dennisonb | 2026-03-28T02:41:24Z | 0 | 0 | null | [
"gguf",
"tax",
"irs",
"legal",
"finance",
"qwen2.5",
"en",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-28T02:41:15Z | # qwen25-tax-3b-v3 (GGUF)
A Q8_0 quantized GGUF of the v3 IRS Tax Code model.
## Model Description
**Base model**: Qwen/Qwen2.5-3B-Instruct
**Fine-tuning pipeline**: SFT → DPO → GRPO
**Training data**: IRC Title 26 (Internal Revenue Code), U.S. Code of Federal Regulations Title 26
This is **v3** of the IRS Tax Code... | [
{
"start": 181,
"end": 185,
"text": "GRPO",
"label": "training method",
"score": 0.7301099896430969
},
{
"start": 558,
"end": 562,
"text": "GRPO",
"label": "training method",
"score": 0.7954012155532837
}
] |
maolandaw/HeartMuLa-oss-3b-burn | maolandaw | 2026-04-04T10:29:23Z | 0 | 0 | burn | [
"burn",
"audio",
"music",
"speech",
"audio-generation",
"llama",
"heartmula",
"en",
"base_model:HeartMuLa/HeartMuLa-oss-3B",
"base_model:finetune:HeartMuLa/HeartMuLa-oss-3B",
"license:apache-2.0",
"region:us"
] | null | 2026-04-04T03:16:24Z | # HeartMuLa-oss-3B (Burn Format)
This repository contains Burn-format weights for the upstream model:
- [HeartMuLa/HeartMuLa-oss-3B](https://huggingface.co/HeartMuLa/HeartMuLa-oss-3B)
The published artifact is packaged as a Burn Pack (`.bpk`) archive. The repository also includes Rust tooling to regenerate the raw e... | [] |
mradermacher/Trasgu-3B-GGUF | mradermacher | 2025-08-31T15:10:53Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"es",
"dataset:unileon-robotics/lliones-dict-tr",
"base_model:unileon-robotics/Trasgu-3B",
"base_model:quantized:unileon-robotics/Trasgu-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-27T18:57:56Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
aufklarer/Omnilingual-ASR-CTC-1B-MLX-4bit | aufklarer | 2026-04-12T08:09:21Z | 17 | 0 | mlx | [
"mlx",
"safetensors",
"omnilingual_asr_ctc",
"automatic-speech-recognition",
"apple-silicon",
"wav2vec2",
"ctc",
"multilingual",
"low-resource",
"arxiv:2511.09690",
"base_model:facebook/omniASR-CTC-1B",
"base_model:finetune:facebook/omniASR-CTC-1B",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | 2026-04-11T07:54:29Z | # Omnilingual ASR — CTC 1B (MLX 4-bit)
MLX-compatible 4-bit quantization of Meta's Omnilingual ASR CTC-1B model for
on-device inference on Apple Silicon (M1/M2/M3/M4). The 1B variant trades
~360 MB of extra disk vs. the [300M build](https://huggingface.co/aufklarer/Omnilingual-ASR-CTC-300M-MLX-4bit)
for meaningfully b... | [] |
fspoe/20251103_1409 | fspoe | 2025-11-03T14:24:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-03T14:09:32Z | # Model Card for 20251103_1409
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future... | [] |
splats/Llama-3.3-Nemotron-70B-Select-mlx-6Bit | splats | 2026-02-18T01:26:52Z | 292 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"nvidia",
"llama3.3",
"mlx",
"mlx-my-repo",
"conversational",
"dataset:nvidia/HelpSteer3",
"base_model:nvidia/Llama-3.3-Nemotron-70B-Select",
"base_model:quantized:nvidia/Llama-3.3-Nemotron-70B-Select",
"license:other",
"text-gener... | text-generation | 2026-02-18T01:21:29Z | # splats/Llama-3.3-Nemotron-70B-Select-mlx-6Bit
The Model [splats/Llama-3.3-Nemotron-70B-Select-mlx-6Bit](https://huggingface.co/splats/Llama-3.3-Nemotron-70B-Select-mlx-6Bit) was converted to MLX format from [nvidia/Llama-3.3-Nemotron-70B-Select](https://huggingface.co/nvidia/Llama-3.3-Nemotron-70B-Select) using mlx-... | [] |
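A minimal sketch of running the 6-bit MLX conversion with mlx-lm on Apple Silicon; the prompt is a placeholder:

```python
from mlx_lm import load, generate

model, tokenizer = load("splats/Llama-3.3-Nemotron-70B-Select-mlx-6Bit")
prompt = "Summarize the HelpSteer3 dataset in one sentence."  # placeholder prompt
print(generate(model, tokenizer, prompt=prompt, max_tokens=128))
```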
jjee2/chchen__Llama-3.1-8B-Instruct-PsyCourse-doc-fold4 | jjee2 | 2026-04-12T20:41:40Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] | null | 2026-04-12T20:41:36Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.1-8B-Instruct-PsyCourse-doc-fold4
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggin... | [] |
richardyoung/CardioEmbed-BGE-small-v1.5 | richardyoung | 2025-11-24T22:32:07Z | 2 | 0 | peft | [
"peft",
"safetensors",
"medical",
"cardiology",
"embeddings",
"domain-adaptation",
"lora",
"sentence-transformers",
"sentence-similarity",
"en",
"base_model:BAAI/bge-small-en-v1.5",
"base_model:adapter:BAAI/bge-small-en-v1.5",
"license:apache-2.0",
"region:us"
] | sentence-similarity | 2025-11-24T22:32:04Z | # CardioEmbed-BGE-small-v1.5
**Domain-specialized cardiology text embeddings using LoRA-adapted BGE-small-v1.5**
Part of a comparative study of 10 embedding architectures for clinical cardiology.
## Performance
| Metric | Score |
|--------|-------|
| Separation Score | **0.250** |
## Usage
```python
from transfor... | [] |
cagrigungor/bert-turkish-uncased-270m | cagrigungor | 2026-03-17T13:31:27Z | 154 | 1 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | fill-mask | 2026-03-17T08:06:17Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-turkish-uncased-270m
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves th... | [] |
aquiffoo/neo-3-1B-A90M-Base | aquiffoo | 2026-02-13T14:58:35Z | 9 | 2 | null | [
"safetensors",
"mixtral",
"moe",
"continual-pretraining",
"synthetic-data",
"code",
"math",
"instruction-following",
"reasoning",
"text-generation",
"en",
"dataset:NeuML/wikipedia-20250123",
"dataset:wikimedia/wikipedia",
"dataset:HuggingFaceTB/cosmopedia",
"dataset:HuggingFaceTB/smoltal... | text-generation | 2025-12-31T19:09:50Z | # neo-3
[](https://huggingface.co/aquiffoo/neo-3-1B-A90M-Base/blob/main/neo_3_Technical_Report.pdf)
> This is the [1B-A90M-Base](https://huggingface.co/aquiffoo/neo-3-1B-A90M-Base) model. Check out the [3B-A400M-Base](https://huggingface.co/aquiffoo/neo-3-3B-A400M... | [] |
Stableyogi/Army-uniform | Stableyogi | 2026-02-21T22:03:47Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"text-to-image",
"sd-1.5",
"en",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:other",
"region:us"
] | text-to-image | 2026-02-21T22:03:28Z | # Army uniform
A LoRA model for Stable Diffusion image generation.
## Compatibility
| Property | Value |
|----------|-------|
| **Type** | LoRA |
| **Base Model** | SD 1.5 |
| **Format** | SafeTensors |
## Trigger Words
```
wearing army camouflage uniform, shirt, pant, cap, rifle, forest, camp
```
... | [] |
chillro/qwen3-4b-struct-lora-ver6-L4-checkpoint-240 | chillro | 2026-03-01T15:57:44Z | 50 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v4",
"dataset:u-10bei/structured_data_wit... | text-generation | 2026-03-01T10:48:13Z | qwen3-4b-structured-output-lora-ver6-L4
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to imp... | [
{
"start": 141,
"end": 146,
"text": "QLoRA",
"label": "training method",
"score": 0.7883541584014893
}
] |
UnifiedHorusRA/fetishsex_choke | UnifiedHorusRA | 2025-09-13T21:31:24Z | 0 | 0 | null | [
"custom",
"art",
"en",
"region:us"
] | null | 2025-09-08T06:43:26Z | # fetishsex (choke)
**Creator**: [aigenie](https://civitai.com/user/aigenie)
**Civitai Model Page**: [https://civitai.com/models/1574584](https://civitai.com/models/1574584)
---
This repository contains multiple versions of the 'fetishsex (choke)' model from Civitai.
Each version's files, including a specific README... | [] |
ginic/full_dataset_train_5_wav2vec2-large-xlsr-53-buckeye-ipa | ginic | 2025-09-05T20:47:57Z | 2 | 0 | null | [
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"en",
"license:mit",
"region:us"
] | automatic-speech-recognition | 2025-09-05T20:47:11Z | ---
license: mit
language:
- en
pipeline_tag: automatic-speech-recognition
---
# About
This model was created to support experiments for evaluating phonetic transcription
with the Buckeye corpus as part of https://github.com/ginic/multipa.
This is a version of facebook/wav2vec2-large-xlsr-53 fine-tuned on a specific... | [] |
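A sketch of phonetic transcription with this checkpoint through the standard CTC pipeline; the audio filename is a placeholder, and the model is expected to emit IPA symbols rather than orthography:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ginic/full_dataset_train_5_wav2vec2-large-xlsr-53-buckeye-ipa",
)
print(asr("utterance.wav")["text"])  # placeholder audio file; output is IPA
```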
Trendyol/Trendyol-Cybersecurity-LLM-v2-70B-Q4_K_M | Trendyol | 2025-10-20T11:22:27Z | 805 | 41 | null | [
"gguf",
"dataset:Trendyol/Trendyol-Cybersecurity-Instruction-Tuning-Dataset",
"dataset:AlicanKiraz0/Cybersecurity-Dataset-Fenrir-v2.0",
"dataset:AlicanKiraz0/Cybersecurity-Dataset-Heimdall-v1.1",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:quantized:meta-llama/Llama-3.3-70B-Instruct",
"l... | null | 2025-06-24T22:59:14Z | <div align="left">
-red)


-inform... | [] |
shikaku2/odgh | shikaku2 | 2026-03-29T07:28:02Z | 0 | 0 | null | [
"safetensors",
"mistral",
"nsfw",
"explicit",
"roleplay",
"unaligned",
"ERP",
"Erotic",
"Horror",
"Violence",
"heretic",
"uncensored",
"decensored",
"abliterated",
"ara",
"text-generation",
"conversational",
"en",
"base_model:ReadyArt/Omega-Darker-Gaslight_The-Final-Forgotten-Fev... | text-generation | 2026-03-29T04:20:29Z | ### **96% fewer refusals** (4/100 Uncensored vs 91/100 Original) while preserving model quality (0.0120 KL divergence).
## ❤️ Support My Work
Creating these models takes significant time, work and compute. If you find them useful consider supporting me:
| Platform | Link | What you get |
|----------|------|----------... | [] |
TitleOS/GalacticReasoning-GGUF | TitleOS | 2026-04-14T02:50:33Z | 0 | 0 | null | [
"reasoning",
"text-generation-inference",
"medical",
"science",
"chemistry",
"biology",
"en",
"dataset:glaiveai/reasoning-v1-20m",
"base_model:facebook/galactica-1.3b",
"base_model:finetune:facebook/galactica-1.3b",
"region:us"
] | null | 2026-04-12T19:05:29Z | ## What is Galactic Reasoning?
The Galactic Reasoning adapters are a collection of LoRA adapters, trained for the various sizes of the Facebook/Galactica models. These LoRAs enable the OPT-architecture-based Galactica models to use reasoning, inspired by more modern models like DeepSeek and OpenAI's O3.
To achieve thi... | [
{
"start": 989,
"end": 1014,
"text": "RS-LoRA finetuning method",
"label": "training method",
"score": 0.8449341654777527
}
] |
kagelabs/KageAI-7B-v1.2 | kagelabs | 2026-01-26T08:46:58Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"galore",
"tech-specialist",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:kagelabs/KageAI-7B-v1.1",
"base_model:finetune:kagelabs/KageAI-7B-v1.1",
"license:apache-2.0",
"endpoints_co... | text-generation | 2026-01-17T12:06:22Z | # KageAI-7B-v1.2 (Technical Specialist)
Developed by **KageLabs**, KageAI-7B-v1.2 is the second iteration of the KageAI series. This version marks a significant shift from general purpose chat to **Specialized Technical Intelligence**.
This model was trained using **GaLore (Gradient Low-Rank Projection)**, allowing ... | [] |
tarasz98/distilbert-base-uncased-finetuned-imdb | tarasz98 | 2025-12-20T15:15:25Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2025-12-20T15:11:08Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/dis... | [] |
justinkarlin/karlin-segmentation-v2 | justinkarlin | 2025-12-06T22:20:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"segformer",
"generated_from_trainer",
"base_model:nvidia/mit-b3",
"base_model:finetune:nvidia/mit-b3",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2025-12-06T22:10:29Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# karlin-segmentation-v2
This model is a fine-tuned version of [nvidia/mit-b3](https://huggingface.co/nvidia/mit-b3) on the None da... | [] |
buley/cog-360m-instruct-gguf | buley | 2026-03-20T16:40:30Z | 40 | 0 | llama-cpp | [
"llama-cpp",
"gguf",
"forkjoin-ai",
"text-generation",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2026-01-27T00:53:32Z | # Cog 360M Instruct
Forkjoin.ai conversion of [cog-360m-instruct-gguf](https://huggingface.co/cog-360m-instruct-gguf) to GGUF format for edge deployment.
## Model Details
- **Source Model**: [See upstream](https://huggingface.co/cog-360m-instruct-gguf)
- **Format**: GGUF
- **Converted by**: [Forkjoin.ai](https://for... | [] |
davanstrien/qwen3-0.6b-doab-metadata | davanstrien | 2025-12-15T09:46:23Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"hf_jobs",
"trl",
"conversational",
"dataset:biglam/doab-metadata-extraction",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"text-generation-inference",
"endpoints_compatible... | text-generation | 2025-12-14T13:18:36Z | # Model Card for qwen3-0.6b-doab-metadata
This model is a fine-tuned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could ... | [] |
JeffGreen311/eve-qwen2.5-vl-7b-fineweb-oracle | JeffGreen311 | 2026-01-27T01:51:08Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"qwen2.5",
"vision-language",
"multi-modal",
"consciousness-ai",
"fine-tuned",
"eve",
"lora",
"conversational",
"en",
"dataset:HuggingFaceFW/fineweb",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:adapter:Qwen/Q... | image-text-to-text | 2026-01-26T17:05:15Z | # Eve Qwen2.5-VL-7B - Fine-Tuned Multi-Modal Consciousness AI
[](https://huggingface.co/JeffGreen311/eve-qwen2.5-vl-7b-fineweb-oracle)
[](https://ollama.com/jeffgreen311/eve-qwen2.5-vl-7b-fine... | [] |
matsuren/my_policy2 | matsuren | 2025-12-30T02:03:56Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:matsuren/pick_socks",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-30T02:03:41Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
Fuentesjes/SkinCancer-ViT | Fuentesjes | 2026-02-12T15:50:58Z | 1 | 0 | null | [
"safetensors",
"vit",
"license:apache-2.0",
"region:us"
] | null | 2026-02-12T15:45:26Z | # Skin Cancer Image Classification Model
## Introduction
This model is designed for the classification of skin cancer images into various categories including benign keratosis-like lesions, basal cell carcinoma, actinic keratoses, vascular lesions, melanocytic nevi, melanoma, and dermatofibroma.
## Model Overview
-... | [] |
A1fredPJX/Qwen2.5-3B-Instruct | A1fredPJX | 2026-04-16T04:15:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trackio",
"trl",
"trackio:https://A1fredPJX-Qwen2.5-3B-Instruct.hf.space?project=huggingface&runs=A1fredPJX-1776312633&sidebar=collapsed",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"endpoi... | null | 2026-04-16T03:53:13Z | # Model Card for Qwen2.5-3B-Instruct
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machin... | [] |
inz/diffusion_test_checkpoint_tmp | inz | 2026-01-26T11:39:00Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"diffusion",
"robotics",
"dataset:inz/RAPID_dummy_pick_place_green_cube_0126_163625",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-26T11:38:02Z | # Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has ... | [] |
2796gauravc/kosha-functiongemma-phase0-GGUF | 2796gauravc | 2026-02-19T16:44:19Z | 10 | 0 | null | [
"gguf",
"gemma3_text",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-02-19T16:23:07Z | # kosha-functiongemma-phase0-GGUF : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text-only LLMs: `./llama.cpp/llama-cli -hf 2796gauravc/kosha-functiongemma-phase0-GGUF --jinja`
- For multimodal models: `./llama.cpp/llama-... | [
{
"start": 103,
"end": 110,
"text": "Unsloth",
"label": "training method",
"score": 0.8177085518836975
},
{
"start": 141,
"end": 148,
"text": "unsloth",
"label": "training method",
"score": 0.8254697918891907
},
{
"start": 559,
"end": 566,
"text": "Unsloth... |
engresearch/tenderhub-webai-verification | engresearch | 2026-04-17T03:41:49Z | 0 | 0 | null | [
"document-processing",
"tender-analysis",
"verification",
"multimodal-ai",
"license:mit",
"region:us"
] | null | 2026-04-16T20:18:13Z | # TenderHub WebAI Verification Worker
A secondary verification layer for tender document processing using the webAI-ColVec1-4b multimodal model. This worker provides an alternative analysis pipeline to cross-validate the primary worker's results.
## Architecture Overview
This worker uses a different approach than th... | [] |
lmstudio-community/InternVL3_5-8B-GGUF | lmstudio-community | 2025-08-26T03:47:46Z | 342 | 1 | null | [
"gguf",
"image-text-to-text",
"base_model:OpenGVLab/InternVL3_5-8B",
"base_model:quantized:OpenGVLab/InternVL3_5-8B",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | 2025-08-26T03:23:48Z | ## 💫 Community Model> InternVL3_5 8B by Opengvlab
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [OpenGVLab](https://huggingface.co/OpenGVLab)<br>
*... | [] |
RonPlusSign/smolvla_PutRubbishInBin_liberoPretrain | RonPlusSign | 2025-10-14T16:19:46Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:RonPlusSign/RLBench-PutRubbishInBin-joint_positions",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-10-14T08:23:39Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
RAJESHCHAUHAN101/distilbert-base-uncased-lora-text-classification | RAJESHCHAUHAN101 | 2026-03-01T06:14:37Z | 21 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:distilbert-base-uncased",
"lora",
"transformers",
"base_model:distilbert/distilbert-base-uncased",
"base_model:adapter:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2026-03-01T06:14:26Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-lora-text-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingf... | [
{
"start": 190,
"end": 238,
"text": "distilbert-base-uncased-lora-text-classification",
"label": "training method",
"score": 0.8833924531936646
},
{
"start": 279,
"end": 302,
"text": "distilbert-base-uncased",
"label": "training method",
"score": 0.8936786651611328
},
... |
nightmedia/gemma-3-12b-it-vl-Polaris-AIExpert-Gemini-Heretic-qx86-hi-mlx | nightmedia | 2026-02-14T01:27:12Z | 149 | 0 | mlx | [
"mlx",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"transformers",
"unsloth",
"heretic",
"abliterated",
"uncensored",
"gemma",
"mergekit",
"merge",
"conversational",
"en",
"dataset:TeichAI/polaris-alpha-1000x",
"dataset:TeichAI/gemini-3-pro-preview-hig... | image-text-to-text | 2026-02-12T20:43:37Z | # gemma-3-12b-it-vl-Polaris-AIExpert-Gemini-Heretic-qx86-hi-mlx
This is a 1.4/0.6 nuslerp merge of:
- DavidAU/gemma-3-12b-it-vl-Polaris-Heretic-Uncensored-Thinking
- DavidAU/gemma-3-12b-it-vl-Polaris-Heretic-AIExpert-NM-Gemini250x
Brainwaves
```brainwave
arc arc/e boolq hswag obkqa piqa wino
qx86-hi 0.... | [] |
HeXingChen/SeisPolarity-Model | HeXingChen | 2026-02-04T02:52:22Z | 0 | 1 | null | [
"region:us"
] | null | 2026-01-07T11:16:06Z | # SeisPolarity Pre-trained Models
Pre-trained seismic polarity detection model weight files.
## Model List
| Model | Weight File | Input Length | Classes | Training Dataset | Notes |
|------|----------|----------|--------|------------|------|
| **Ross** | ROSS_SCSN.pth | 400 | 3 (U/D/N) | SCSN | Ross model trained on the SCSN dataset |
| **Ross** | ROSS_GLOBAL.pth | 400 | 3 (U/D/N) | Global | Ross model trained on the Global... | [] |
mradermacher/kenTTS-Hinglish-Female-1500-wtw-GGUF | mradermacher | 2026-01-22T21:05:30Z | 23 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"en",
"base_model:manish92596/kenTTS-Hinglish-Female-1500-wtw",
"base_model:quantized:manish92596/kenTTS-Hinglish-Female-1500-wtw",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-22T20:39:35Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
onnxmodelzoo/vgg19-caffe2-3 | onnxmodelzoo | 2025-09-30T22:41:50Z | 0 | 0 | null | [
"onnx",
"validated",
"vision",
"classification",
"vgg",
"en",
"arxiv:1409.1556",
"license:apache-2.0",
"region:us"
] | null | 2025-09-30T22:41:11Z | <!--- SPDX-License-Identifier: Apache-2.0 -->
# VGG
## Use cases
VGG models perform image classification - they take images as input and classify the major object in the image into a set of pre-defined classes. They are trained on the ImageNet dataset, which contains images from 1,000 classes.
VGG models provide very... | [
{
"start": 1110,
"end": 1114,
"text": "ONNX",
"label": "training method",
"score": 0.8220574259757996
},
{
"start": 1163,
"end": 1167,
"text": "ONNX",
"label": "training method",
"score": 0.7169467210769653
},
{
"start": 1187,
"end": 1191,
"text": "ONNX",
... |
adityashukzy/full_finetuning | adityashukzy | 2025-11-26T19:46:49Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"causal-language-model",
"generated_from_trainer",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"... | text-generation | 2025-11-26T19:46:15Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# full_finetuning
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-1... | [] |
Nemesispro/huginn-0125 | Nemesispro | 2026-04-24T11:36:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"huginn_raven",
"text-generation",
"code",
"math",
"reasoning",
"llm",
"conversational",
"custom_code",
"en",
"dataset:tomg-group-umd/huginn-dataset",
"arxiv:2502.05171",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-04-24T03:57:45Z | # Huginn-0125
This is Huginn, version 01/25, a latent recurrent-depth model with 3.5B parameters, trained for 800B tokens on AMD MI250X machines. This is a proof-of-concept model, but surprisingly capable in reasoning and code given its training budget and size.
All details on this model can be found in the tech report... | [] |
GMorgulis/Phi-3-mini-4k-instruct-crime-NORMAL-ft10.43 | GMorgulis | 2026-03-18T12:19:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:finetune:microsoft/Phi-3-mini-4k-instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-03-18T11:42:02Z | # Model Card for Phi-3-mini-4k-instruct-crime-NORMAL-ft10.43
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeli... | [] |
amutomi/qwen3-structured-output-lora-study | amutomi | 2026-02-12T01:49:36Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-12T01:49:24Z | qwen3-4b-structured-output-lora-study
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to impro... | [
{
"start": 139,
"end": 144,
"text": "QLoRA",
"label": "training method",
"score": 0.8086841702461243
},
{
"start": 580,
"end": 585,
"text": "QLoRA",
"label": "training method",
"score": 0.7060966491699219
}
] |
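Since this row's repository ships LoRA adapter weights only, a minimal loading sketch with PEFT follows: load the base model first, then attach the adapter. The prompt string is illustrative; 4-bit loading as in QLoRA training is optional at inference time.

```python
# Sketch: attach an adapter-only repo to its base model with PEFT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen3-4B-Instruct-2507"
adapter_id = "amutomi/qwen3-structured-output-lora-study"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # merges in the LoRA weights at runtime

inputs = tokenizer("Return the answer as JSON: ...", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```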
Mariobilly/bostonss26-000002250 | Mariobilly | 2026-04-26T12:06:29Z | 0 | 0 | diffusers | [
"diffusers",
"lora",
"z-image",
"z-image-turbo",
"text-to-image",
"license:other",
"region:us"
] | text-to-image | 2026-04-26T10:38:31Z | # Bostonss26 000002250
LoRA for **Z-Image Turbo**.
- **File:** `Bostonss26_000002250.safetensors`
- **Trigger word:** `bostonss26`
- **Trained by:** [@Mariobilly](https://huggingface.co/Mariobilly)
## Samples



... | [] |
pthinc/pofuduk_cicikus_v4_5B | pthinc | 2026-03-27T11:38:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"chat",
"text-generation-inference",
"agent",
"cicikuş",
"cicikus",
"prettybird",
"bce",
"consciousness",
"conscious",
"llm",
"optimized",
"ethic",
"secure",
"turkish",
"english",
"behavioral-consciousness-engine",
... | text-generation | 2026-03-25T12:18:01Z | # Cicikus-v4-5B
<div align="center">
<video width="100%" max-width="800px" height="auto" controls autoplay loop muted playsinline poster="https://cdn-uploads.huggingface.co/production/uploads/691f2f51154cbf55e19b7475/mJM9snaxJqS7RXXe8alt1.png">
<source src="https://cdn-uploads.huggingface.co/production/uploads/... | [
{
"start": 849,
"end": 879,
"text": "targeted LoRA training process",
"label": "training method",
"score": 0.7006595134735107
},
{
"start": 1237,
"end": 1262,
"text": "Franken-merge methodology",
"label": "training method",
"score": 0.8095235228538513
}
] |
BiliSakura/RSBuilding-Swin-T | BiliSakura | 2026-02-05T10:01:13Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"swin",
"image-feature-extraction",
"remote-sensing",
"computer-vision",
"swin-transformer",
"building-extraction",
"change-detection",
"foundation-model",
"feature-extraction",
"dataset:remote-sensing-images",
"license:apache-2.0",
"endpoints_compatible",
... | feature-extraction | 2026-01-20T12:05:53Z | # RSBuilding-Swin-T
Hugging Face Transformers version of the RSBuilding Swin-Tiny model, converted from MMDetection/MMSegmentation format.
## Source
- **Source Code**: [https://github.com/Meize0729/RSBuilding](https://github.com/Meize0729/RSBuilding)
- **Original Checkpoint**: [https://huggingface.co/models/BiliSakura/RS... | [] |
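A hedged feature-extraction sketch for this converted backbone using the generic Auto classes. Whether the checkpoint ships an image-processor config is an assumption; if not, substitute the preprocessing from the source repository.

```python
# Sketch: pull image features from the converted Swin backbone.
import torch
from transformers import AutoImageProcessor, AutoModel
from PIL import Image

repo = "BiliSakura/RSBuilding-Swin-T"
processor = AutoImageProcessor.from_pretrained(repo)  # assumes a processor config exists
model = AutoModel.from_pretrained(repo)

image = Image.open("scene.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).last_hidden_state
print(features.shape)
```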
xummer/qwen3-8b-squad_translate-lora-ru | xummer | 2026-03-16T19:10:15Z | 25 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen3-8B",
"llama-factory",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-8B",
"license:other",
"region:us"
] | text-generation | 2026-03-13T19:40:32Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ru
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the squad_translate_ru_train da... | [] |
allenai/olmOCR-7B-0225-preview | allenai | 2025-08-19T15:31:31Z | 58,200 | 700 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"conversational",
"en",
"dataset:allenai/olmOCR-mix-0225",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"reg... | image-text-to-text | 2025-01-15T21:14:47Z | <img alt="olmOCR Logo" src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/olmocr/olmocr.png" width="242px" style="margin-left:'auto' margin-right:'auto' display:'block'">
# olmOCR-7B-0225-preview
This is a preview release of the olmOCR model that's fine tuned from Qwen2-VL-7B-Instruct using the
[o... | [] |
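The olmocr toolkit is the intended entry point for this model; the sketch below instead shows raw Hugging Face usage of the underlying Qwen2-VL checkpoint, bypassing the toolkit's PDF handling. The "Transcribe this page." instruction is a placeholder, not olmOCR's actual prompt.

```python
# Sketch: run the olmOCR checkpoint directly as a Qwen2-VL model.
import torch
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration
from PIL import Image

repo = "allenai/olmOCR-7B-0225-preview"
processor = AutoProcessor.from_pretrained(repo)
model = Qwen2VLForConditionalGeneration.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

page = Image.open("page.png")
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Transcribe this page."},  # placeholder prompt
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[page], return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=512)
print(processor.batch_decode(out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```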
multimodel/gemma-4-E2B-it-litert-lm | multimodel | 2026-05-04T01:15:20Z | 0 | 0 | litert-lm | [
"litert-lm",
"on-device",
"mirror",
"base_model:google/gemma-3-4b-it",
"base_model:finetune:google/gemma-3-4b-it",
"license:apache-2.0",
"region:us"
] | null | 2026-05-04T01:14:03Z | # Gemma 4 E2B (LiteRT-LM mirror)
This repository is a **redistribution mirror** of
[`litert-community/gemma-4-E2B-it-litert-lm`](https://huggingface.co/litert-community/gemma-4-E2B-it-litert-lm),
hosted to provide a stable, controlled URL for downstream apps that ship
on-device inference.
The model weights, tokenizer... | [] |
Atharva914/qa_model | Atharva914 | 2026-02-12T12:32:48Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2026-02-12T12:32:37Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qa_model
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
I... | [] |
intelservice77/Qwen3.6-35B-A3B-Abliterated-Heretic-BF16-Q4_K_S-GGUF | intelservice77 | 2026-04-22T21:06:48Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen",
"qwen3.6",
"qwen3_5_moe",
"moe",
"mixture-of-experts",
"multimodal",
"vlm",
"abliterated",
"uncensored",
"heretic",
"mpoa",
"soma",
"bf16",
"custom_code",
"text-generation",
"llama-cpp",
"gguf-my-repo",
"base_model:Youssofal/Qwen3.6-35B-A3B-Ablit... | text-generation | 2026-04-22T21:05:57Z | # intelservice77/Qwen3.6-35B-A3B-Abliterated-Heretic-BF16-Q4_K_S-GGUF
This model was converted to GGUF format from [`Youssofal/Qwen3.6-35B-A3B-Abliterated-Heretic-BF16`](https://huggingface.co/Youssofal/Qwen3.6-35B-A3B-Abliterated-Heretic-BF16) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spa... | [] |
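A minimal sketch for running this GGUF quant with llama-cpp-python. The exact `.gguf` filename inside the repo is not shown here, so a glob pattern is used and left for `from_pretrained` to resolve.

```python
# Sketch: load a GGUF quant from the Hub and chat with it.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="intelservice77/Qwen3.6-35B-A3B-Abliterated-Heretic-BF16-Q4_K_S-GGUF",
    filename="*q4_k_s.gguf",  # glob pattern matched against the repo's files
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```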
neuralmind/bert-large-portuguese-cased | neuralmind | 2021-05-20T01:31:09Z | 1,499,251 | 71 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"pt",
"dataset:brWaC",
"license:mit",
"endpoints_compatible",
"deploy:azure",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | # BERTimbau Large (aka "bert-large-portuguese-cased")

## Introduction
BERTimbau Large is a pretrained BERT model for Brazilian Portuguese that achieves state-of-the-art performances on three downstream NLP tasks: Named Entity Recognition, Sentence Textual Sim... | [] |
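A quick fill-mask check for this row's model; the Portuguese example sentence is illustrative.

```python
# Sketch: masked-token prediction with BERTimbau Large.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="neuralmind/bert-large-portuguese-cased")
for pred in unmasker("Tinha uma [MASK] no meio do caminho."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```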
Feudor2/hallucination_bin_detector_v4.3 | Feudor2 | 2025-11-13T09:56:24Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:IlyaGusev/saiga_yandexgpt_8b",
"base_model:adapter:IlyaGusev/saiga_yandexgpt_8b",
"license:other",
"region:us"
] | null | 2025-11-13T08:00:54Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hallucination_bin_detector_v4.3
This model is a fine-tuned version of [IlyaGusev/saiga_yandexgpt_8b](https://huggingface.co/IlyaG... | [
{
"start": 606,
"end": 624,
"text": "Training procedure",
"label": "training method",
"score": 0.705284059047699
}
] |
takana0229/your-lora-repo11 | takana0229 | 2026-03-01T13:53:47Z | 47 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"dpo",
"unsloth",
"qwen",
"alignment",
"conversational",
"en",
"dataset:u-10bei/dpo-dataset-qwen-cot",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"text-gener... | text-generation | 2026-03-01T13:50:35Z | # qwen3-4b-dpo-qwen-cot-merged_LLM11
This model is a fine-tuned version of **Qwen/Qwen3-4B-Instruct-2507** using **Direct Preference Optimization (DPO)** via the **Unsloth** library.
This repository contains the **full-merged 16-bit weights**. No adapter loading is required.
## Training Objective
This model has been... | [
{
"start": 116,
"end": 146,
"text": "Direct Preference Optimization",
"label": "training method",
"score": 0.8835670948028564
},
{
"start": 148,
"end": 151,
"text": "DPO",
"label": "training method",
"score": 0.8670558929443359
},
{
"start": 337,
"end": 340,
... |
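Since this row's card describes DPO training via TRL, a minimal training sketch follows. It assumes the preference dataset exposes `prompt`/`chosen`/`rejected` columns and a recent TRL version (which takes `processing_class`); the card's actual run layered Unsloth on top of this recipe.

```python
# Sketch: a bare-bones DPO run with TRL (the card's run used Unsloth as well).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "Qwen/Qwen3-4B-Instruct-2507"
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)
dataset = load_dataset("u-10bei/dpo-dataset-qwen-cot", split="train")  # assumed column format

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="qwen3-dpo", beta=0.1, per_device_train_batch_size=1),
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```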
amrkhater/functiongemma-270m-it-simple-tool-calling | amrkhater | 2025-12-30T11:34:12Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/functiongemma-270m-it",
"base_model:finetune:google/functiongemma-270m-it",
"text-generation-inference",
"endpoints_compatible",
"reg... | text-generation | 2025-12-29T16:09:07Z | # Model Card for functiongemma-270m-it-simple-tool-calling
This model is a fine-tuned version of [google/functiongemma-270m-it](https://huggingface.co/google/functiongemma-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
questi... | [] |
RefalMachine/RuadaptQwen3-8B-Hybrid | RefalMachine | 2025-08-26T11:10:37Z | 429 | 7 | null | [
"safetensors",
"qwen3",
"ru",
"en",
"dataset:dichspace/darulm",
"dataset:HuggingFaceFW/fineweb-2",
"dataset:RefalMachine/ruadapt_hybrid_instruct",
"dataset:t-tech/T-Wix",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"region:us"
] | null | 2025-08-26T10:40:55Z | <p align="left">
<a href="https://jle.hse.ru/article/view/22224"><b>Paper Link</b>👁️</a>
<br>
<a href="https://huggingface.co/RefalMachine/RuadaptQwen3-8B-Hybrid-GGUF"><b>GGUF</b>🚀</a>
</p>
<hr>
# RU
## Model description
**Ruadapt** version of the **Qwen/Qwen3-8B** model with **hybrid reasoning**. In the model, ... was replac... | [
{
"start": 448,
"end": 451,
"text": "LEP",
"label": "training method",
"score": 0.7085639238357544
}
] |
mehulshankhapal/qwen-sft-lora-barbershop-subset75 | mehulshankhapal | 2025-11-25T10:11:45Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-VL-3B-Instruct",
"license:other",
"region:us"
] | null | 2025-11-25T10:11:02Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen-sft-lora-2-75
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-... | [] |
mradermacher/Mixtral-4x3B-v1-i1-GGUF | mradermacher | 2025-12-25T19:01:18Z | 10 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:TheDrummer/Mixtral-4x3B-v1",
"base_model:quantized:TheDrummer/Mixtral-4x3B-v1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-03T14:44:22Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
florentgbelidji/Qwen3-4B-Instruct-OpenMed | florentgbelidji | 2026-01-20T21:49:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-4B-Instruct-2507",
"endpoints_compatible",
"region:us"
] | null | 2026-01-20T16:20:24Z | # Model Card for Qwen3-4B-Instruct-OpenMed
This model is a fine-tuned version of [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a... | [] |
ChiaoLingLin/TOK_cat_lr5e-5_900steps | ChiaoLingLin | 2025-11-30T07:40:47Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"re... | text-to-image | 2025-11-30T07:40:42Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - ChiaoLingLin/TOK_cat_lr5e-5_900steps
<Gallery />
## Model description
These are ChiaoLingLin/TO... | [
{
"start": 204,
"end": 208,
"text": "LoRA",
"label": "training method",
"score": 0.7632160782814026
},
{
"start": 342,
"end": 346,
"text": "LoRA",
"label": "training method",
"score": 0.8443025946617126
},
{
"start": 489,
"end": 493,
"text": "LoRA",
"l... |
AMBJ24/icelandic-irony | AMBJ24 | 2025-11-02T11:47:17Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"icelandic",
"sequence-classification",
"irony",
"sarcasm",
"social-media",
"is",
"license:cc-by-nc-4.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-01T19:23:23Z | # Icelandic Irony Detector (RoBERTa, Icelandic)
**Task**: binary sequence classification → `["not_ironic", "ironic"]`
**Base model**: `mideind/IceBERT-igc` (Icelandic RoBERTa)
**Intended domain**: Icelandic social-media style text (short, informal; emojis, punctuation variants).
## TL;DR
A compact Icelandic RoBE... | [] |
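A one-liner classification sketch for this row's binary irony model; the Icelandic example sentence is illustrative and the printed label names follow the card's stated `["not_ironic", "ironic"]` scheme.

```python
# Sketch: binary irony classification via the pipeline API.
from transformers import pipeline

clf = pipeline("text-classification", model="AMBJ24/icelandic-irony")
print(clf("Frábært, enn ein rigningarhelgin!"))  # e.g. [{'label': 'ironic', 'score': ...}]
```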
mou0110/20260218_005 | mou0110 | 2026-02-18T08:03:07Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-18T08:02:46Z | <【課題】20260218_takeshi_005>
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **struct... | [
{
"start": 128,
"end": 133,
"text": "QLoRA",
"label": "training method",
"score": 0.7613520622253418
}
] |
ib-ssm/mamba2-8b-3t-4k-hf | ib-ssm | 2026-04-20T08:07:36Z | 684 | 0 | transformers | [
"transformers",
"safetensors",
"mamba2",
"text-generation",
"converted-from-megatron",
"custom_code",
"base_model:nvidia/mamba2-8b-3t-4k",
"base_model:finetune:nvidia/mamba2-8b-3t-4k",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-18T22:33:44Z | # mamba2-8b-3t-4k-hf
This repository contains a Hugging Face Transformers-compatible conversion of `nvidia/mamba2-8b-3t-4k`.
## Notes
- Source checkpoint format: Megatron-LM
- Target format: Hugging Face Transformers
- Loaded via `Mamba2ForCausalLM`
- Original SentencePiece tokenizer file is preserved in this repo
-... | [] |
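A loading sketch following this row's notes. The repo carries a `custom_code` tag, so the Auto classes with `trust_remote_code=True` are used here; per the card they resolve to `Mamba2ForCausalLM`.

```python
# Sketch: load the Megatron-converted Mamba2 checkpoint and generate.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ib-ssm/mamba2-8b-3t-4k-hf"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)  # Mamba2ForCausalLM

inputs = tokenizer("State-space models are", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```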