| modelId (string, 9–122 chars) | author (string, 2–36 chars) | last_modified (timestamp[us, UTC]: 2021-05-20 01:31:09 – 2026-05-05 06:14:24) | downloads (int64, 0–4.03M) | likes (int64, 0–4.32k) | library_name (string, 189 classes) | tags (list, 1–237 items) | pipeline_tag (string, 53 classes) | createdAt (timestamp[us, UTC]: 2022-03-02 23:29:04 – 2026-05-05 05:54:22) | card (string, 500–661k chars) | entities (list, 0–12 items) |
|---|---|---|---|---|---|---|---|---|---|---|
dobrien/ViT-B-32-EuroSAT-dummy-RESISC45-1e-0-arithmetic | dobrien | 2026-04-05T02:50:26Z | 0 | 0 | null | [
"pytorch",
"region:us"
] | null | 2026-03-20T06:37:38Z | ## Dataset: EuroSAT
## Dataset Location: tanganke/eurosat
## Dummy Dataset: RESISC45
## Dummy Dataset Location: tanganke/resisc45
## Loss Term: 1e-0
## Merge Method: arithmetic
## Test-Set Accuracy: 0.9909465312957764
## Test-Set Loss: 0.03124334898568218
... | [] |
unsloth/Apriel-1.5-15b-Thinker-GGUF | unsloth | 2025-10-02T10:48:30Z | 1,553 | 47 | transformers | [
"transformers",
"gguf",
"unsloth",
"text-generation",
"arxiv:2508.10948",
"base_model:ServiceNow-AI/Apriel-1.5-15b-Thinker",
"base_model:quantized:ServiceNow-AI/Apriel-1.5-15b-Thinker",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-10-02T01:48:00Z | <div>
<p style="margin-top: 0;margin-bottom: 0;">
<em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/u... | [] |
letri345/output_loss_only | letri345 | 2025-11-26T12:50:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-base",
"base_model:finetune:VietAI/vit5-base",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-11-26T12:50:21Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output_loss_only
This model is a fine-tuned version of [VietAI/vit5-base](https://huggingface.co/VietAI/vit5-base) on an unknown ... | [] |
mradermacher/agent-os-1b5-merged-GGUF | mradermacher | 2026-03-21T13:33:35Z | 351 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"lora",
"merged",
"agent-os",
"en",
"base_model:devsomosahub/agent-os-1b5-merged",
"base_model:adapter:devsomosahub/agent-os-1b5-merged",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-21T12:53:28Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
luckeciano/Qwen-2.5-7B-DrGRPO-Adam-HessianMaskToken-0.001-v3_8019 | luckeciano | 2025-09-14T21:58:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"text-generation... | text-generation | 2025-09-14T17:25:26Z | # Model Card for Qwen-2.5-7B-DrGRPO-Adam-HessianMaskToken-0.001-v3_8019
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained ... | [] |
Wisnu1354/Gumi-Banten | Wisnu1354 | 2026-04-27T11:19:45Z | 0 | 0 | null | [
"image-classification",
"plant-identification",
"pytorch",
"cnn",
"vision-transformer",
"efficientnet",
"vit",
"id",
"dataset:gumi-banten",
"license:mit",
"model-index",
"region:us"
] | image-classification | 2026-04-27T11:12:33Z | # 🌿 Gumi Banten Plant Identification — CNN + ViT Hybrid
Model for identifying Gumi Banten plants using a hybrid **CNN (EfficientNet-B4) + Vision Transformer (ViT)** architecture.
## 📊 Model Performance
| Metric | Value |
|--------|-------|
| **Test Accuracy** | 0.9012 (90.12%) |
| **Best Val Accuracy** | 0.9202 (92.02... | [] |
UnifiedHorusRA/Qwen_Edit_In_The_Water | UnifiedHorusRA | 2025-09-10T05:57:41Z | 1 | 0 | null | [
"custom",
"art",
"en",
"region:us"
] | null | 2025-09-08T07:03:37Z | # Qwen Edit In The Water
**Creator**: [tackbear](https://civitai.com/user/tackbear)
**Civitai Model Page**: [https://civitai.com/models/1914157](https://civitai.com/models/1914157)
---
This repository contains multiple versions of the 'Qwen Edit In The Water' model from Civitai.
Each version's files, including a spe... | [] |
rjvanv/audio-flamingo-3-hf-lora-finetuned | rjvanv | 2026-02-26T17:29:20Z | 7 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:nvidia/audio-flamingo-3-hf",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:nvidia/audio-flamingo-3-hf",
"license:other",
"region:us"
] | text-generation | 2026-02-26T17:28:25Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# audio-flamingo-3-hf-lora-finetuned
This model is a fine-tuned version of [nvidia/audio-flamingo-3-hf](https://huggingface.co/nvid... | [] |
studyforptd/Llama3.1-8B_DPO_from_SFT | studyforptd | 2026-01-27T13:02:29Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"dpo",
"lora",
"transformers",
"trl",
"text-generation",
"arxiv:2305.18290",
"base_model:meta-llama/Llama-3.1-8B",
"region:us"
] | text-generation | 2026-01-27T12:53:15Z | # Model Card for Llama3.1-8B_dpo_from_sft
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time mac... | [
{
"start": 694,
"end": 697,
"text": "DPO",
"label": "training method",
"score": 0.852828323841095
},
{
"start": 999,
"end": 1002,
"text": "DPO",
"label": "training method",
"score": 0.8204577565193176
}
] |
JANGQ-AI/KimiMix-Small-JANGTQ | JANGQ-AI | 2026-04-24T03:45:36Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"kimi_k25",
"moe",
"mixture-of-experts",
"jangtq",
"jangq-ai",
"reap",
"kimi",
"kimi-k2",
"apple-silicon",
"code-generation",
"text-generation",
"conversational",
"custom_code",
"base_model:moonshotai/Kimi-K2.6",
"base_model:finetune:moonshotai/Kimi-K2.6",
"li... | text-generation | 2026-04-24T02:25:15Z | <div align="center">
<img src="jangq-logo.png" height="80" alt="JANGQ-AI"/>
</div>
<div align="center">
**HumanEval+ (EvalPlus hidden tests, 164 Qs)**
| pass@1 | pass@5 |
|:---:|:---:|
| **88.41%** (145/164) | **95.12%** (156/164) |
*Sampled (temp=0.6, top_p=0.95) · max-tokens 5000/8000 · EvalPlus strict grading*... | [] |
hunyuanvideo-community/HunyuanVideo-1.5-Diffusers-720p_t2v | hunyuanvideo-community | 2025-12-07T18:04:57Z | 2,827 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-video",
"license:other",
"diffusers:HunyuanVideo15Pipeline",
"region:us"
] | text-to-video | 2025-11-27T09:38:49Z | Hunyuan1.5 uses attention masks with variable-length sequences. For best performance, we recommend using an attention backend that handles padding efficiently.
We recommend installing [kernels](https://github.com/huggingface/kernels) (`pip install kernels`) to access prebuilt attention kernels.
You can check our [docu... | [] |
BootesVoid/cmgur39pm03b2g0ca6iazx9pu_cmgur9vaq03b9g0ca3vxmntgo | BootesVoid | 2025-10-17T11:55:20Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-10-17T11:55:18Z | # Cmgur39Pm03B2G0Ca6Iazx9Pu_Cmgur9Vaq03B9G0Ca3Vxmntgo
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https:... | [] |
mradermacher/Mistral-7B-Instruct-v0.2-abliterated-obliteratus-GGUF | mradermacher | 2026-03-28T19:11:04Z | 297 | 0 | transformers | [
"transformers",
"gguf",
"abliteration",
"uncensored",
"OBLITERATUS",
"representation-engineering",
"refusal-removal",
"en",
"base_model:richardyoung/Mistral-7B-Instruct-v0.2-abliterated-obliteratus",
"base_model:quantized:richardyoung/Mistral-7B-Instruct-v0.2-abliterated-obliteratus",
"license:a... | null | 2026-03-28T16:20:17Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
danielm1405/lr-1e-05-epochs-1.0-main-paraphrase-others-0.1-ef5e5ae3 | danielm1405 | 2025-11-16T19:14:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"alignment-handbook",
"sft",
"conversational",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-16T19:12:09Z | # Model Card for None
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only... | [] |
jinx2321/byt5-tagged-1e4-paper-distilled-byt5-small-5 | jinx2321 | 2026-02-06T09:32:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/byt5-small",
"base_model:finetune:google/byt5-small",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2026-02-06T07:19:51Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-tagged-1e4-paper-distilled-byt5-small-5
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/goo... | [] |
margretmeng1020/regulatory-capacity-classifier | margretmeng1020 | 2026-01-24T08:45:18Z | 0 | 0 | null | [
"safetensors",
"bert",
"text-classification",
"multi-label-classification",
"regulatory-capacity",
"collaborative-learning",
"education",
"nlp",
"en",
"dataset:custom",
"license:mit",
"model-index",
"region:us"
] | text-classification | 2026-01-24T08:26:49Z | # Regulatory Capacity Classifier
A BERT-based multi-label classifier for analyzing regulatory capacities in collaborative learning dialogues.
## Model Description
| Attribute | Value |
|-----------|-------|
| **Base Model** | `bert-base-uncased` |
| **Task** | Multi-label Text Classification |
| **Number of Labels**... | [
{
"start": 332,
"end": 349,
"text": "Training Strategy",
"label": "training method",
"score": 0.7941963076591492
}
] |
voidful/llm-codec-fisher-no-init | voidful | 2026-01-01T22:55:35Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-01-01T09:40:06Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llm-codec-fisher-no-init
This model is a fine-tuned version of [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B... | [] |
OloriBern/checkpoints_musique_BAAI_bge-m3_3ep-bge-m3-2000-mixer-3ep | OloriBern | 2026-01-22T16:52:52Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"cross-encoder",
"reranker",
"generated_from_trainer",
"dataset_size:9383",
"loss:BinaryCrossEntropyLoss",
"text-ranking",
"arxiv:1908.10084",
"base_model:BAAI/bge-m3",
"base_model:finetune:BAAI/bge-m3",
"model-index",
"text-embeddings... | text-ranking | 2026-01-22T16:42:02Z | # CrossEncoder based on BAAI/bge-m3
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for ... | [] |
tshalif/qwen3-0.6b-codeforces-cots-sft | tshalif | 2025-12-22T16:09:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"hf_jobs",
"sft",
"trl",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"endpoints_compatible",
"region:us"
] | null | 2025-12-22T15:21:29Z | # Model Card for qwen3-0.6b-codeforces-cots-sft
This model is a fine-tuned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but ... | [] |
Bacon666/Athlon-8B-0.1 | Bacon666 | 2024-09-02T20:04:47Z | 11 | 4 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Sao10K/Llama-3.1-8B-Stheno-v3.4",
"base_model:merge:Sao10K/Llama-3.1-8B-Stheno-v3.4",
"base_model:SicariusSicariiStuff/Dusk_Rainbow",
"base_model:merge:SicariusSicariiStuff/Dusk_Rainbow"... | text-generation | 2024-09-02T07:18:03Z | ### BEFORE YOU USE THIS...
**this is a merge [My first ever merge. Feedback is appreciated. Imo, it's decent?]**
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the della_linear merge method using ... | [
{
"start": 288,
"end": 300,
"text": "della_linear",
"label": "training method",
"score": 0.8444752097129822
},
{
"start": 1232,
"end": 1244,
"text": "della_linear",
"label": "training method",
"score": 0.8610866665840149
}
] |
mlx-community/Fun-CosyVoice3-0.5B-2512-4bit | mlx-community | 2025-12-17T11:11:51Z | 143 | 2 | mlx-audio-plus | [
"mlx-audio-plus",
"safetensors",
"cosyvoice3",
"mlx",
"tts",
"text-to-speech",
"zh",
"en",
"ja",
"ko",
"de",
"fr",
"ru",
"it",
"es",
"base_model:FunAudioLLM/Fun-CosyVoice3-0.5B-2512",
"base_model:finetune:FunAudioLLM/Fun-CosyVoice3-0.5B-2512",
"region:us"
] | text-to-speech | 2025-12-16T21:34:42Z | # mlx-community/Fun-CosyVoice3-0.5B-2512-4bit
This model was converted to MLX format from [FunAudioLLM/Fun-CosyVoice3-0.5B-2512](https://huggingface.co/FunAudioLLM/Fun-CosyVoice3-0.5B-2512) using [mlx-audio-plus](https://github.com/DePasqualeOrg/mlx-audio-plus) version **0.1.4**.
This model uses **4-bit quantization*... | [] |
Estellez/thai-gpt2-finetuned | Estellez | 2025-09-02T04:15:30Z | 0 | 0 | null | [
"safetensors",
"gpt2",
"thai",
"qa",
"fine-tuned",
"dataset:disease_3000",
"license:mit",
"region:us"
] | null | 2025-09-02T04:01:01Z | # Thai GPT-2 Fine-Tuned
## Model Details
### Model Description
A GPT-2 model fine-tuned for Thai question-answering tasks,
trained on a dataset of 3,000 disease-related question-answer pairs.
- **Developed by:** chayanan lakad
- **Shared by:** Estellez
- **Model type:** Causal Language Model (GPT-2 fine-t... | [] |
mradermacher/Llama-3.1-8B-sft-gen-dpo-10k-IPO-GGUF | mradermacher | 2025-09-01T21:56:51Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"dpo",
"en",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-01T15:03:11Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
Pieces/time-classification-flan-t5-base-best | Pieces | 2026-02-16T00:28:37Z | 3 | 1 | null | [
"safetensors",
"t5",
"temporal",
"time-module",
"pieces",
"intent-classification",
"temporal-intent",
"text-classification",
"en",
"dataset:Pieces/temporal-intent-classification-dataset-split",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",... | text-classification | 2026-02-15T23:51:43Z | # TIME-Module: Classification — flan-t5-base
## Model Description
Temporal intent classification using the larger flan-t5-base model. Classifies user queries into 6 temporal intent categories.
## Training Details
- **Base Model:** [google/flan-t5-base](https://huggingface.co/google/flan-t5-base)
- **Architecture:**... | [] |
plzsay/pen_and_cup | plzsay | 2025-12-09T03:40:16Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:plzsay/pen_and_cup",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-09T03:40:00Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
hubnemo/so101_sort_cubes_no_top_smolvla_lora_rank32_bs32_lr1e-3_steps10000 | hubnemo | 2025-12-05T13:53:33Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:Orellius/so101_sort_cubes_no_top",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-05T13:53:25Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
ctaguchi/ssc-top-mms-model-mix-adapt-max-lowlr | ctaguchi | 2025-12-06T17:06:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-12-06T06:12:01Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ssc-top-mms-model-mix-adapt-max-lowlr
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook... | [] |
mradermacher/granite-4.1-8b-FlintStones-V1-i1-GGUF | mradermacher | 2026-05-02T06:28:25Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"finetune",
"unsloth",
"granite-4.1",
"en",
"base_model:DavidAU/granite-4.1-8b-FlintStones-V1",
"base_model:quantized:DavidAU/granite-4.1-8b-FlintStones-V1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-05-02T05:30:18Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
freya1101/lab2_8bit_adam | freya1101 | 2026-02-25T05:53:41Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2026-02-25T05:49:49Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lab2_8bit_adam
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-... | [] |
ooeoeo/opus-mt-de-bzs-ct2-float16 | ooeoeo | 2026-04-17T12:08:56Z | 0 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"custom",
"license:apache-2.0",
"region:us"
] | translation | 2026-04-17T12:08:49Z | # ooeoeo/opus-mt-de-bzs-ct2-float16
CTranslate2 float16 quantized version of `Helsinki-NLP/opus-mt-de-bzs`.
Converted for use in the [ooeoeo](https://ooeoeo.com) desktop engine
with the `opus-mt-server` inference runtime.
## Source
- Upstream model: [Helsinki-NLP/opus-mt-de-bzs](https://huggingface.co/Helsinki-NLP/... | [] |
liajun/ppo-Huggy | liajun | 2025-11-21T13:54:40Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2025-11-21T13:54:33Z | # **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We... | [] |
priorcomputers/llama-3.1-8b-instruct-cn-ideation-kr0.2-a1.0-creative | priorcomputers | 2026-02-03T14:49:59Z | 1 | 0 | null | [
"safetensors",
"llama",
"creativityneuro",
"llm-creativity",
"mechanistic-interpretability",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-02-03T14:47:30Z | # llama-3.1-8b-instruct-cn-ideation-kr0.2-a1.0-creative
This is a **CreativityNeuro (CN)** modified version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
## Model Details
- **Base Model**: meta-llama/Llama-3.1-8B-Instruct
- **Modification**: CreativityNeuro weight sc... | [] |
mradermacher/Mixtral-8x7B-Yes-Instruct-LimaRP-GGUF | mradermacher | 2025-09-05T07:57:53Z | 9 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:lemonilia/LimaRP",
"base_model:antisoc-qa-assoc/Mixtral-8x7B-Yes-Instruct-LimaRP",
"base_model:quantized:antisoc-qa-assoc/Mixtral-8x7B-Yes-Instruct-LimaRP",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-04T15:57:05Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
noflm/whisper-ft-jdd-topic1-dropfreqonly-base-epoch50 | noflm | 2026-02-06T09:51:04Z | 0 | 0 | speechbrain | [
"speechbrain",
"safetensors",
"whisper",
"fine-tuning",
"jdd-topic1",
"automatic-speech-recognition",
"ja",
"dataset:noflm/jdd_topic1_dropfreqonly_sample200",
"arxiv:2212.04356",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:mit",
"region:us"
] | automatic-speech-recognition | 2026-02-06T09:50:15Z | # Whisper Fine-tuning Experiment: jdd_topic1_dropfreqonly_sample200-whisper-base-epoch50
## Model Description
This model contains a complete Whisper fine-tuning experiment including:
- Training checkpoints (SpeechBrain format)
- Final model (Transformers format)
- Test results and evaluation metrics
- Training histor... | [] |
Weisly/merge-hf-test2 | Weisly | 2025-11-23T02:39:00Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"unsloth",
"arxiv:2402.03300",
"base_model:unsloth/Qwen3-1.7B",
"base_model:finetune:unsloth/Qwen3-1.7B",
"endpoints_compatible",
"region:us"
] | null | 2025-11-23T02:11:58Z | # Model Card for Qwen3-1.7B-GRPO
This model is a fine-tuned version of [unsloth/Qwen3-1.7B](https://huggingface.co/unsloth/Qwen3-1.7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could onl... | [
{
"start": 898,
"end": 902,
"text": "GRPO",
"label": "training method",
"score": 0.8791064023971558
},
{
"start": 1193,
"end": 1197,
"text": "GRPO",
"label": "training method",
"score": 0.8484601974487305
}
] |
mradermacher/Symbiotic-8B-GGUF | mradermacher | 2026-04-11T09:55:17Z | 23 | 0 | transformers | [
"transformers",
"gguf",
"qwen3",
"8b",
"qwen3-8b",
"symbiotic",
"symbtioicai",
"convergentintel",
"en",
"dataset:0xZee/dataset-CoT-Advanced-Calculus-268",
"base_model:reaperdoesntknow/Symbiotic-8B",
"base_model:quantized:reaperdoesntknow/Symbiotic-8B",
"license:afl-3.0",
"endpoints_compati... | null | 2025-05-08T04:17:03Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/reaperdoesntknow/Symbiotic-8B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model ... | [] |
MikdadMrhij/distilbert-base-uncased-distilled-clinc | MikdadMrhij | 2025-10-05T10:26:44Z | 2 | 1 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-10-03T20:31:49Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/di... | [
{
"start": 190,
"end": 229,
"text": "distilbert-base-uncased-distilled-clinc",
"label": "training method",
"score": 0.8948734402656555
},
{
"start": 270,
"end": 293,
"text": "distilbert-base-uncased",
"label": "training method",
"score": 0.9142425656318665
},
{
"s... |
ermiaazarkhalili/Qwen2.5-0.5B-SFT-OpenHermes-2.5-100-GGUF | ermiaazarkhalili | 2026-04-18T02:40:30Z | 6 | 0 | null | [
"gguf",
"llama.cpp",
"ollama",
"lm-studio",
"quantized",
"base_model:ermiaazarkhalili/Qwen2.5-0.5B-SFT-OpenHermes-2.5-100",
"base_model:quantized:ermiaazarkhalili/Qwen2.5-0.5B-SFT-OpenHermes-2.5-100",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-30T06:10:51Z | # Qwen2.5-0.5B-SFT-OpenHermes-2.5-100-GGUF
GGUF quantized versions of [ermiaazarkhalili/Qwen2.5-0.5B-SFT-OpenHermes-2.5-100](https://huggingface.co/ermiaazarkhalili/Qwen2.5-0.5B-SFT-OpenHermes-2.5-100) for use with llama.cpp, Ollama, LM Studio, and other GGUF-compatible tools.
## Available Quantizations
| File | Qua... | [] |
mradermacher/ADG-Alpaca-GPT4-LLaMa3-8B-GGUF | mradermacher | 2026-04-16T08:25:37Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"ADG",
"SFT",
"zh",
"en",
"base_model:WisdomShell/ADG-Alpaca-GPT4-LLaMa3-8B",
"base_model:quantized:WisdomShell/ADG-Alpaca-GPT4-LLaMa3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2026-04-16T06:43:12Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
attilczuk/sponge_merge7 | attilczuk | 2025-08-28T10:54:35Z | 1 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:attilczuk/2025.08.25_merge_test2",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-08-28T10:54:11Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
megagonlabs/omnes-flores-40-lang-42-treebank-v0 | megagonlabs | 2026-03-08T22:18:04Z | 161 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"vllm",
"conversational",
"be",
"bor",
"cs",
"da",
"de",
"en",
"es",
"et",
"fa",
"fi",
"fr",
"ga",
"gd",
"he",
"hr",
"ht",
"hy",
"hyw",
"id",
"is",
"ja",
"ko",
"lt",
"lv",
"nl",
"no",
"pcm",
... | text-generation | 2026-03-05T23:16:33Z | ---
license: cc-by-sa-4.0
thumbnail: https://github.com/megagonlabs/omnes-flores/raw/main/docs/images/omnes-flores-logo_arc_title.png
datasets:
- universal-dependencies/universal_dependencies
language:
- be
- bor
- cs
- da
- de
- en
- es
- et
- fa
- fi
- fr
- ga
- gd
- he
- hr
- ht
- hy
- hyw
- id
- is
- ja
- ko
- lt
-... | [] |
TroglodyteDerivations/Smolagents_Ice_Cream_Truck_Optimization | TroglodyteDerivations | 2025-09-05T22:00:32Z | 0 | 0 | null | [
"smolagents",
"optimization",
"base_model:deepseek-ai/DeepSeek-V3.1",
"base_model:finetune:deepseek-ai/DeepSeek-V3.1",
"region:us"
] | null | 2025-09-05T21:45:03Z | # Hugging Face Model Card: Smolagents Ice Cream Truck Optimization
## Model Description
This project demonstrates an AI-powered ice cream truck supply chain optimization system using `smolagents`. It compares two different AI agent approaches for solving a real-world business problem: automatically selecting the best... | [] |
ginic/train_duration_100_samples_4_wav2vec2-large-xlsr-53-buckeye-ipa | ginic | 2025-09-11T20:07:46Z | 0 | 0 | null | [
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"en",
"license:mit",
"region:us"
] | automatic-speech-recognition | 2025-09-11T20:06:38Z | ---
license: mit
language:
- en
pipeline_tag: automatic-speech-recognition
---
# About
This model was created to support experiments for evaluating phonetic transcription
with the Buckeye corpus as part of https://github.com/ginic/multipa.
This is a version of facebook/wav2vec2-large-xlsr-53 fine-tuned on a specific... | [] |
mradermacher/daVinci-origin-7B-i1-GGUF | mradermacher | 2026-01-29T21:34:09Z | 46 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:GAIR/daVinci-origin-7B",
"base_model:quantized:GAIR/daVinci-origin-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-01-29T18:31:29Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
hubnemo/so101_sort_smolvla_lora_rank32_bs64_lr1e-4_steps1000 | hubnemo | 2025-11-24T19:29:21Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:hubnemo/so101_sort",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-24T19:29:13Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
rlimonta/translation_model | rlimonta | 2026-01-19T17:19:52Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2026-01-19T16:45:41Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# translation_model
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unk... | [] |
sumitkalamkar/mistral-7b-medical-qa-qlora | sumitkalamkar | 2026-03-31T09:14:12Z | 0 | 0 | peft | [
"peft",
"safetensors",
"medical",
"question-answering",
"qlora",
"mistral",
"biomedical",
"nlp",
"en",
"dataset:pubmed_qa",
"arxiv:2305.14314",
"arxiv:2106.09685",
"arxiv:2310.06825",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apach... | question-answering | 2026-03-31T08:38:03Z | # Mistral-7B Medical QA — QLoRA Fine-tuned
**Author:** Sumit Pandurang Kalamkar
**Platform:** Google Colab (Tesla T4 16GB)
**Date:** March 2026
## Model Description
Mistral-7B-v0.1 fine-tuned on PubMed QA (pqa_labeled) using
QLoRA for biomedical question answering. Trained on 630 samples
using only 1.2% of tot... | [] |
ThalisAI/lotus-mountain-flux | ThalisAI | 2026-03-27T23:50:04Z | 5 | 0 | diffusers | [
"diffusers",
"lora",
"flux",
"text-to-image",
"world-morph",
"style",
"environment",
"architecture",
"lotus",
"fantasy",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2026-03-27T23:08:11Z | # Lotus Mountain - World Morph [Flux]
<Gallery />
## Description
Lotus Mountain is a world-morphing LoRA that transforms environments into lotus-infused architecture and organic structures. Unlike character or style LoRAs, this model operates on the *world itself* - palaces bloom with petal archways, libraries grow ... | [] |
Muapi/namespace-lewdcactus-style | Muapi | 2025-08-25T21:37:52Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-25T21:37:43Z | # NameSpace | Lewdcactus Style

**Base model**: Flux.1 D
**Trained words**: NameSpace \(Artist\), @____Namespace, Lewdcactus64 \(Artist\), Lewdcactus \(Artist\)
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import reque... | [] |
vitthalbhandari/xlsr-1b-aft-one-sco | vitthalbhandari | 2026-03-15T00:41:53Z | 59 | 0 | null | [
"safetensors",
"wav2vec2",
"audio",
"automatic-speech-recognition",
"xlsr",
"adapter",
"sco",
"dataset:mozilla-foundation/common_voice_spontaneous_speech",
"license:cc-by-nc-4.0",
"region:us"
] | automatic-speech-recognition | 2026-03-07T01:49:52Z | # XLS-R 1B Adapter Fine-tuned for Scots
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b)
on the Mozilla Common Voice Spontaneous Speech dataset for Scots (sco).
## Training
- Base model: facebook/wav2vec2-xls-r-1b
- Fine-tuning method: Attention ad... | [] |
RedHatAI/Qwen3-14B-NVFP4 | RedHatAI | 2025-11-21T16:07:47Z | 23,708 | 0 | null | [
"safetensors",
"qwen3",
"fp4",
"vllm",
"text-generation",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:Qwen/Qwen3-14B",
"base_model:quantized:Qwen/Qwen3-14B",
"license:apache-2.0",
"8-bit",
"compressed-tensors",
"region:us"
] | text-generation | 2025-10-23T19:17:57Z | # Qwen3-14B-NVFP4
## Model Overview
- **Model Architecture:** Qwen/Qwen3-14B
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** FP4
- **Activation quantization:** FP4
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade comp... | [] |
Aremaki/eds-ner-cardioccc | Aremaki | 2025-11-11T17:13:18Z | 0 | 0 | edsnlp | [
"edsnlp",
"safetensors",
"biomedical",
"ner",
"clinical",
"ehr",
"cardiology",
"nlp",
"caradioccc",
"token-classification",
"es",
"base_model:PlanTL-GOB-ES/bsc-bio-ehr-es",
"base_model:finetune:PlanTL-GOB-ES/bsc-bio-ehr-es",
"license:apache-2.0",
"model-index",
"region:us"
] | token-classification | 2025-11-11T14:47:17Z | # EDS-NER-CARDIOCCC
This repository contains the final NER model trained on the **CardioCCC** dataset.
CardioCCC is a collection of **cardiology clinical case reports** used for **domain adaptation**. Clinical case reports are a textual genre in medicine that describes a patient’s medical history, symptoms, diagnosi... | [] |
AbdulSittar/llama2-lora-technology | AbdulSittar | 2026-02-10T20:43:56Z | 0 | 0 | peft | [
"peft",
"safetensors",
"text-generation",
"transformers",
"lora",
"conversational",
"https://zenodo.org/records/18082502",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-10T11:41:20Z | # LoRA Technology Model
## Model Overview
**Model Name:** LoRA Technology
**Developed by:** Abdul Sittar
**Model Type:** Text Generation (PEFT, LoRA)
**Frameworks:** Hugging Face Transformers, PEFT, Safetensors
**Languages:** English
**License:** Apache 2.0
This model is a LoRA-finetuned version of **LLaM... | [] |
mradermacher/domain-generator-v1-GGUF | mradermacher | 2025-08-05T14:24:58Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:anushuyabaidya/domain-generator-v1",
"base_model:quantized:anushuyabaidya/domain-generator-v1",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T14:24:11Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
qualiaadmin/10644853-8b1f-4fbd-b16a-be0b230369e1 | qualiaadmin | 2025-11-06T12:23:08Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:Calvert0921/SmolVLA_LiftCube_Franka_100",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-11-06T12:22:51Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
Yassmin1/token_yassmin | Yassmin1 | 2026-02-23T20:16:58Z | 32 | 0 | null | [
"gguf",
"llama",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us"
] | null | 2026-02-23T20:13:05Z | # token_yassmin : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `./llama.cpp/llama-cli -hf Yassmin1/token_yassmin --jinja`
- For multimodal models: `./llama.cpp/llama-mtmd-cli -hf Yassmin1/token_yassmin --j... | [
{
"start": 85,
"end": 92,
"text": "Unsloth",
"label": "training method",
"score": 0.8551601767539978
},
{
"start": 123,
"end": 130,
"text": "unsloth",
"label": "training method",
"score": 0.8372455835342407
},
{
"start": 417,
"end": 424,
"text": "Unsloth",... |
flexitok/bpe_arb_Arab_4000_v2 | flexitok | 2026-04-13T19:59:56Z | 0 | 0 | null | [
"tokenizer",
"bpe",
"flexitok",
"fineweb2",
"arb",
"license:mit",
"region:us"
] | null | 2026-04-13T19:00:54Z | # Byte-Level BPE Tokenizer: arb_Arab (4K)
A **Byte-Level BPE** tokenizer trained on **arb_Arab** data from Fineweb-2-HQ.
## Training Details
| Parameter | Value |
|-----------|-------|
| Algorithm | Byte-Level BPE |
| Language | `arb_Arab` |
| Target Vocab Size | 4,000 |
| Final Vocab Size | 4,000 |
| Pre-tokenizer ... | [] |
kojima-lab/molcrawl-compounds-bert-small | kojima-lab | 2026-04-24T11:45:33Z | 14 | 0 | null | [
"safetensors",
"bert",
"pytorch",
"molecule-compound",
"fill-mask",
"license:apache-2.0",
"region:us"
] | fill-mask | 2026-04-02T08:56:51Z | # molcrawl-compounds-bert-small
## Model Description
GPT-2 small (124M parameters) foundation model pre-trained on compound SMILES strings from the MolCrawl dataset.
The tokenizer is a character-level BPE tokenizer (vocab_size=612) that encodes each SMILES character as a separate token. Input SMILES strings should b... | [] |
mradermacher/RAP-Qwen3-VL-8B-GGUF | mradermacher | 2025-12-27T00:51:24Z | 36 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Yeongtak/RAP-Qwen3-VL-8B",
"base_model:quantized:Yeongtak/RAP-Qwen3-VL-8B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-27T00:15:36Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
Karlosjozefos/KairosBookV3Notes | Karlosjozefos | 2025-10-11T16:44:40Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"base_model:Karlosjozefos/KairosBookV2",
"base_model:finetune:Karlosjozefos/KairosBookV2",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-10-10T23:02:39Z | # Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path... | [] |
swadeshb/Llama-3.2-3B-Instruct-CRPO-V3 | swadeshb | 2025-11-26T04:01:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-11-26T02:22:07Z | # Model Card for Llama-3.2-3B-Instruct-CRPO-V3
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question =... | [
{
"start": 1268,
"end": 1272,
"text": "GRPO",
"label": "training method",
"score": 0.7048264145851135
}
] |
toreoffical/OCM-OrganicCloneMachines-v3 | toreoffical | 2026-04-25T07:11:55Z | 0 | 0 | null | [
"turkish",
"nlp",
"neuromorphic",
"ocm",
"organic-clone-machines",
"tr",
"license:mit",
"region:us"
] | null | 2026-04-25T06:49:11Z | # 🧬 OCM — Organic Clone Machines v3.0
> **TORE TEKNOLOJİ & ARAŞTIRMA** — by [toreoffical](https://huggingface.co/toreoffical)
## Overview
OCM (Organic Clone Machines) is a Turkish NLP system that models biological learning.
There is no token concept: word blocks are learned and replicated directly (mitosis).
## Mima... | [] |
ShuzhengTian/ppo-Huggy | ShuzhengTian | 2025-09-26T03:52:07Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2025-09-26T03:51:58Z | # **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We... | [] |
exolabs/FLUX.1-Krea-dev | exolabs | 2026-01-26T18:21:58Z | 18 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"image-generation",
"flux",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"endpoints_compatible",
"diffusers:FluxPipeline",
"region:us"
] | text-to-image | 2026-01-26T18:21:21Z | ![FLUX.1 Krea [dev] Grid](./teaser.png)
`FLUX.1 Krea [dev]` is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions.
For more information, please read our [blog post](https://bfl.ai/announcements/flux-1-krea-dev) and [Krea's blog post](https://www.krea.ai/blog/flux-kre... | [] |
laion/nemotron-terminal-system_administration__Qwen3-8B | laion | 2026-04-13T17:04:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-13T17:02:07Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nemotron-system-administration__Qwen3-8B
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-... | [] |
cja5553/biogpt_MIMIC_IV_death_in_30_prediction_lora_ti | cja5553 | 2026-02-12T06:13:45Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:microsoft/biogpt",
"base_model:adapter:microsoft/biogpt",
"region:us"
] | null | 2026-02-12T03:17:18Z | # biogpt_MIMIC_IV_death_in_30_prediction_lora_ti
This model is designed to predict 30-day mortality upon hospital discharge. It is trained on discharge notes from the MIMIC-IV dataset, which comprises open-source Electronic Health Records (EHRs).
The model was trained with a novel tabular-infused LoRA, whereby the pre-o... | [] |
leobianco/npov_SFT_mistralai_S130104_epo1_lr1e-4_r8_2601301120 | leobianco | 2026-01-30T11:21:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.3",
"endpoints_compatible",
"region:us"
] | null | 2026-01-30T11:21:11Z | # Model Card for npov_SFT_mistralai_S130104_epo1_lr1e-4_r8_2601301120
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers ... | [] |
jusiflix/UltraRealisticInfluncer | jusiflix | 2026-01-05T10:22:13Z | 29 | 4 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:Tongyi-MAI/Z-Image-Turbo",
"base_model:adapter:Tongyi-MAI/Z-Image-Turbo",
"license:apache-2.0",
"region:us"
] | text-to-image | 2026-01-05T10:16:28Z | # InfluStream
<Gallery />
## Model description
This workflow is designed to generate ultra-realistic influencer-style portraits with a consistent adult female identity.
It focuses on natural facial structure, real skin texture, visible pores, subtle fine lines, and photorealistic lighting to avoid an artificial or ... | [] |
Ryandro/mt5-small-finetuned-1000data-1ep-Lp6 | Ryandro | 2025-09-19T07:49:56Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-19T07:40:32Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-1000data-1ep-Lp6
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/... | [] |
arvindcr4/llama-3.2-1b-distillation-offpolicy-lora | arvindcr4 | 2026-03-14T09:26:15Z | 20 | 0 | peft | [
"peft",
"safetensors",
"tinker",
"distillation",
"openthoughts",
"lora",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:adapter:meta-llama/Llama-3.2-1B",
"license:apache-2.0",
"region:us"
] | null | 2026-03-14T09:23:59Z | # Llama 3.2 1B - Distillation Off-Policy LoRA
LoRA adapter trained with **Tinker** (by Thinking Machines) using off-policy distillation on the OpenThoughts3 dataset.
## Training Details
- **Base model:** meta-llama/Llama-3.2-1B
- **Method:** Off-policy distillation (SFT on OpenThoughts3)
- **LoRA rank:** 32, alpha: 32
-... | [] |
EvgenyShivchenkoUIT/moonshine-tiny-ONNX-french-full | EvgenyShivchenkoUIT | 2026-04-07T08:51:39Z | 0 | 0 | transformers.js | [
"transformers.js",
"onnx",
"moonshine",
"automatic-speech-recognition",
"audio",
"speech-to-text",
"speech",
"french",
"asr",
"fr",
"dataset:facebook/multilingual_librispeech",
"arxiv:2410.15608",
"base_model:Cornebidouil/moonshine-tiny-fr",
"base_model:quantized:Cornebidouil/moonshine-tin... | automatic-speech-recognition | 2026-04-07T08:51:08Z | # moonshine-tiny-fr (ONNX)
This is an ONNX version of [Cornebidouil/moonshine-tiny-fr](https://huggingface.co/Cornebidouil/moonshine-tiny-fr). It was automatically converted and uploaded using [this Hugging Face Space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
## Usage with Transformers.js
Se... | [] |
dreambleumer/chaoyang-Qwen2.5-3B-Instruct-5828s-08891 | dreambleumer | 2025-12-31T08:54:52Z | 3 | 0 | null | [
"gguf",
"qwen2",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-31T08:53:49Z | # chaoyang-Qwen2.5-3B-Instruct-5828s-08891 : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `./llama.cpp/llama-cli -hf dreambleumer/chaoyang-Qwen2.5-3B-Instruct-5828s-08891 --jinja`
- For multimodal models: ... | [] |
microsoft/Dayhoff-170M-UR90-HL-24000 | microsoft | 2026-04-02T01:40:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"jamba",
"text-generation",
"protein-generation",
"custom_code",
"dataset:microsoft/Dayhoff",
"arxiv:2502.12479",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-02T01:40:50Z | # Model Card for Dayhoff
Dayhoff is an Atlas of both protein sequence data and generative language models — a centralized resource that brings together 3.34 billion protein sequences across 1.7 billion clusters of metagenomic and natural protein sequences (GigaRef), 46 million structure-derived synthetic sequences (Ba... | [] |
hector-gr/RLCR-v4-ks-adaptive-floor05-bins100-ece100-uniqueness-cold-math | hector-gr | 2026-03-18T15:59:11Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-18T04:35:30Z | # Model Card for RLCR-v4-ks-adaptive-floor05-bins100-ece100-uniqueness-cold-math
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question =... | [] |
mradermacher/DeepSeek-R1-Distill-Llama-8B-Uncensored-GGUF | mradermacher | 2025-02-06T13:06:48Z | 551 | 4 | transformers | [
"transformers",
"gguf",
"en",
"base_model:braindao/DeepSeek-R1-Distill-Llama-8B-Uncensored",
"base_model:quantized:braindao/DeepSeek-R1-Distill-Llama-8B-Uncensored",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-06T04:48:53Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/braindao/DeepSeek-R1-Distill-Llama-8B-Uncensored
<!-- provided-files -->
weighted/imatrix quants are available a... | [] |
leobianco/npov_RM_google_S130104_LLM_false_STRUCT_false_epo3_lr1e-3_r8_2602241629 | leobianco | 2026-02-24T16:33:42Z | 11 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:google/gemma-2-2b-it",
"lora",
"transformers",
"base_model:google/gemma-2-2b-it",
"license:gemma",
"region:us"
] | null | 2026-02-24T16:30:20Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# npov_RM_google_S130104_LLM_false_STRUCT_false_epo3_lr1e-3_r8_2602241629
This model is a fine-tuned version of [google/gemma-2-2b-... | [] |
Alumin-Hydro/Qwen3.5-9B-Physics | Alumin-Hydro | 2026-05-02T03:00:36Z | 0 | 2 | null | [
"safetensors",
"gguf",
"physics",
"Physics",
"text-generation",
"en",
"zh",
"dataset:camel-ai/physics",
"base_model:Qwen/Qwen3.5-9B",
"base_model:quantized:Qwen/Qwen3.5-9B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-05-01T08:55:28Z | # Qwen3.5-9B-Physics
A parameter-efficient fine-tuned LoRA adapter built on **Qwen/Qwen3.5-9B**, optimized for physics problem-solving. Trained with LLaMA Factory on the `camel_physics` dataset.

This repository provides bo... | [] |
dgenes/poca-SoccerTwos | dgenes | 2026-02-13T19:38:34Z | 11 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2026-02-13T19:38:24Z | # **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Document... | [] |
t20e/Transformer | t20e | 2026-04-22T05:09:10Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2026-04-22T05:01:29Z | # Trained Model Info
🚨 Must use this [repo](https://github.com/t20e/AI_projects_and_res/tree/main/Transformer) to use this model's weights.
- Model will translate English to German.
- Trained on only **20%** of the WMT 2014 English-German dataset for 15 epochs.
- Took **~17 hours** to train on an M1 Mac with 32 core... | [] |
Kagarinas/kagarinas_style_LoRA | Kagarinas | 2026-03-23T10:44:03Z | 4 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"re... | text-to-image | 2026-03-23T10:43:47Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - Kagarinas/kagarinas_style_LoRA
<Gallery />
## Model description
These are Kagarinas/kagarinas_s... | [
{
"start": 330,
"end": 334,
"text": "LoRA",
"label": "training method",
"score": 0.7859656810760498
},
{
"start": 477,
"end": 481,
"text": "LoRA",
"label": "training method",
"score": 0.7079099416732788
}
] |
hamilton65/MMed-Llama-3-8B-EnIns | hamilton65 | 2026-05-02T16:04:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"medical",
"conversational",
"en",
"zh",
"ja",
"fr",
"ru",
"es",
"dataset:Henrychur/MMedC",
"dataset:axiong/pmc_llama_instructions",
"arxiv:2402.13963",
"base_model:Henrychur/MMed-Llama-3-8B",
"base_model:finetune:Henrychur/M... | text-generation | 2026-05-02T16:04:40Z | # MMedLM
[💻Github Repo](https://github.com/MAGIC-AI4Med/MMedLM) [🖨️arXiv Paper](https://arxiv.org/abs/2402.13963)
The official model weights for "Towards Building Multilingual Language Model for Medicine".
## Introduction
This repo contains MMed-Llama 3-8B-EnIns, which is based on MMed-Llama 3-8B. We further fin... | [] |
URajinda/Qwen2.5-0.5B-burmese-v1.2 | URajinda | 2025-12-17T13:50:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:URajinda/Qwen2.5-0.5B-burmese-v1.1-merged",
"base_model:finetune:URajinda/Qwen2.5-0.5B-burmese-v1.1-merged",
"endpoints_compatible",
"region:us"
] | null | 2025-12-17T12:00:28Z | # Model Card for Qwen2.5-0.5B-burmese-v1.2
This model is a fine-tuned version of [URajinda/Qwen2.5-0.5B-burmese-v1.1-merged](https://huggingface.co/URajinda/Qwen2.5-0.5B-burmese-v1.1-merged).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeli... | [] |
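The quick-start snippet in the card above is truncated in this dump; a completed sketch of the same pattern (the prompt is hypothetical, and the `text-generation` task is assumed from the Qwen2.5 base):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="URajinda/Qwen2.5-0.5B-burmese-v1.2")
# Hypothetical Burmese prompt; the original card's example is cut off.
print(generator("မင်္ဂလာပါ", max_new_tokens=64)[0]["generated_text"])
```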
fdastak/FoodEntity_Hybrid_Lora_unFreezing_v3 | fdastak | 2025-12-01T23:18:50Z | 3 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:dmis-lab/biobert-v1.1",
"lora",
"transformers",
"base_model:dmis-lab/biobert-v1.1",
"region:us"
] | null | 2025-11-30T19:50:37Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FoodEntity_Hybrid_Lora_unFreezing_v3
This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-la... | [] |
SohamK18/data-cleaning-grpo | SohamK18 | 2026-04-06T04:08:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:... | text-generation | 2026-04-06T04:06:55Z | # Model Card for data-cleaning-grpo
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time mac... | [
{
"start": 717,
"end": 721,
"text": "GRPO",
"label": "training method",
"score": 0.7120276093482971
}
] |
Andro0s/stable-diffusion-xl-base-1.0.1 | Andro0s | 2026-04-30T23:41:04Z | 594 | 0 | diffusers | [
"diffusers",
"onnx",
"safetensors",
"text-to-image",
"stable-diffusion",
"arxiv:2307.01952",
"arxiv:2211.01324",
"arxiv:2108.01073",
"arxiv:2112.10752",
"license:openrail++",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2026-04-30T16:14:00Z | # SD-XL 1.0-base Model Card

## Model

[SDXL](https://arxiv.org/abs/2307.01952) consists of an [ensemble of experts](https://arxiv.org/abs/2211.01324) pipeline for latent diffusion:
In a first step, the base model is used to generate (noisy) latents,
which are then further ... | [] |
mradermacher/Qwen3-8B-Tulu-SFT-Dolci-Reasoning-100k-GGUF | mradermacher | 2026-04-16T06:56:12Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Rexhaif/Qwen3-8B-Tulu-SFT-Dolci-Reasoning-100k",
"base_model:quantized:Rexhaif/Qwen3-8B-Tulu-SFT-Dolci-Reasoning-100k",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-16T06:00:36Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
chizhikchi/Spanish_disease_finder | chizhikchi | 2023-03-16T16:34:48Z | 11 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"token-classification",
"biomedical",
"clinical",
"ner",
"es",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-06-21T16:47:26Z | # Disease mention recognizer for Spanish clinical texts 🦠🔬
This model derives from the participation of the SINAI team in the [DISease TExt Mining Shared Task (DISTEMIST)](https://temu.bsc.es/distemist/). The DISTEMIST-entities subtrack required automatically finding disease mentions in clinical cases. Taking into account the l... | [
{
"start": 388,
"end": 400,
"text": "NER approach",
"label": "training method",
"score": 0.7101555466651917
}
] |
tatsuyaaaaaaa/act_so_arm101_grab_red_dice_policy_800k_step | tatsuyaaaaaaa | 2026-05-02T14:05:39Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:tatsuyaaaaaaa/so_arm101_grab_red_dice",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-05-02T14:04:36Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
HuggingFaceFW/finepdfs_edu_classifier_pcm_Latn | HuggingFaceFW | 2025-10-06T05:54:11Z | 5 | 0 | null | [
"safetensors",
"modernbert",
"pc",
"dataset:HuggingFaceFW/finepdfs_fw_edu_labeled",
"license:apache-2.0",
"region:us"
] | null | 2025-10-06T05:53:58Z | ---
language:
- pc
license: apache-2.0
datasets:
- HuggingFaceFW/finepdfs_fw_edu_labeled
---
# FinePDFs-Edu classifier (pcm_Latn)
## Model summary
This is a classifier for judging the educational value of web pages. It was developed to filter and curate educational content from web datasets and was trained on 210968 ... | [] |
mradermacher/voyage-nonstory-9b-041726-i1-GGUF | mradermacher | 2026-04-21T16:06:46Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:LatitudeGames/voyage-nonstory-9b-041726",
"base_model:quantized:LatitudeGames/voyage-nonstory-9b-041726",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-04-21T12:27:49Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
Devy1/MiniLM-cosqa-64 | Devy1 | 2025-09-30T20:48:11Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:9020",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:f... | sentence-similarity | 2025-09-30T20:48:04Z | # SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector s... | [] |
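A short usage sketch (standard `sentence-transformers` API; the 384-dimensional output follows from the card's description):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Devy1/MiniLM-cosqa-64")
# CoSQA-style pairing: a natural-language query against a code snippet.
embeddings = model.encode(["how to sum two numbers", "def add(a, b): return a + b"])
print(embeddings.shape)  # (2, 384)
```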
mradermacher/reactor-ai-20b-GGUF | mradermacher | 2025-11-27T03:13:05Z | 11 | 0 | transformers | [
"transformers",
"gguf",
"gpt",
"llm",
"arc-labs",
"reactor-ai",
"fine-tuned",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-11-27T01:38:45Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: MXFP4_MOE x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->... | [] |
ThreeSixNine/Llama-3.1-8B-OBLITERATED | ThreeSixNine | 2026-04-23T14:54:41Z | 0 | 0 | null | [
"safetensors",
"llama",
"obliteratus",
"abliteration",
"uncensored",
"obliterate",
"en",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"region:us"
] | null | 2026-04-23T14:54:09Z | # Llama-3.1-8B-OBLITERATED
This model was abliterated using the **`advanced`** method via
[OBLITERATUS](https://github.com/elder-plinius/OBLITERATUS).
| Detail | Value |
|--------|-------|
| Base model | `meta-llama/Llama-3.1-8B` |
| Method | `advanced` |
| Source | obliterate |
## How to Use
```python
from transfo... | [
{
"start": 92,
"end": 103,
"text": "OBLITERATUS",
"label": "training method",
"score": 0.8124514818191528
},
{
"start": 138,
"end": 149,
"text": "OBLITERATUS",
"label": "training method",
"score": 0.7809207439422607
},
{
"start": 724,
"end": 735,
"text": "... |
manancode/opus-mt-es-lt-ctranslate2-android | manancode | 2025-08-17T16:44:08Z | 0 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] | translation | 2025-08-17T16:43:58Z | # opus-mt-es-lt-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-es-lt` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-es-lt
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by*... | [] |
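A minimal inference sketch for the converted model above (assumes the repo ships the CTranslate2 model directory plus the original `source.spm`/`target.spm` SentencePiece models, as OPUS-MT conversions typically do):

```python
import ctranslate2
import sentencepiece as spm

# INT8-quantized translator; device="cpu" matches the Android/edge target.
translator = ctranslate2.Translator("opus-mt-es-lt-ctranslate2-android", device="cpu")
sp_src = spm.SentencePieceProcessor(model_file="source.spm")
sp_tgt = spm.SentencePieceProcessor(model_file="target.spm")

tokens = sp_src.encode("Hola, ¿cómo estás?", out_type=str)
result = translator.translate_batch([tokens])
print(sp_tgt.decode(result[0].hypotheses[0]))  # Lithuanian output
```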
tencent/Youtu-Parsing | tencent | 2026-01-29T03:12:59Z | 150 | 38 | transformers | [
"transformers",
"safetensors",
"youtu_vl",
"text-generation",
"image-text-to-text",
"conversational",
"custom_code",
"arxiv:2601.20430",
"arxiv:2601.19798",
"arxiv:2512.24618",
"base_model:tencent/Youtu-LLM-2B",
"base_model:finetune:tencent/Youtu-LLM-2B",
"license:other",
"region:us"
] | image-text-to-text | 2026-01-23T08:51:17Z | <div align="center">
# <img src="assets/youtu-parsing-logo.png" alt="Youtu-Parsing Logo" height="100px">
[📃 License](https://huggingface.co/tencent/Youtu-Parsing/blob/main/LICENSE.txt) • [👨💻 Code](https://github.com/TencentCloudADP/youtu-parsing) • [🖥️ Demo](https://huggingface.co/spaces/Tencent/Youtu-Parsing) •... | [] |
switcode0/camembert-oml-ner | switcode0 | 2026-03-17T13:35:05Z | 286 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"token-classification",
"generated_from_trainer",
"base_model:almanach/camembert-base",
"base_model:finetune:almanach/camembert-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | token-classification | 2026-03-17T10:03:47Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-oml-ner
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the None datas... | [] |
thu-sail-lab/Time-RCD | thu-sail-lab | 2026-03-31T12:57:16Z | 26 | 1 | null | [
"safetensors",
"time_rcd",
"custom_code",
"region:us"
] | null | 2025-10-18T07:35:43Z | # Towards Foundation Models for Zero-Shot Time Series Anomaly Detection: Leveraging Synthetic Data and Relative Context Discrepancy
This repository contains the implementation of Time-RCD for time series anomaly detection, integrated with the TSB-AD (Time Series Benchmark for Anomaly Detection) datasets.
## Project S... | [] |
chandra1976/vit-facial-expression-fatigue | chandra1976 | 2026-04-29T05:55:13Z | 47 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:mo-thecreator/vit-Facial-Expression-Recognition",
"base_model:finetune:mo-thecreator/vit-Facial-Expression-Recognition",
"model-index",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-11-19T06:40:13Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-facial-expression-fatigue
This model is a fine-tuned version of [mo-thecreator/vit-Facial-Expression-Recognition](https://hug... | [] |
XiAT/MyAwesomeModel-TestRepo | XiAT | 2026-05-01T10:02:37Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"license:mit",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2026-05-01T10:02:16Z | # MyAwesomeModel
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="figures/fig1.png" width="60%" alt="MyAwesomeModel" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="LICENSE" style="m... | [
{
"start": 757,
"end": 770,
"text": "post-training",
"label": "training method",
"score": 0.8278292417526245
}
] |
phanerozoic/argus-edge | phanerozoic | 2026-04-27T16:25:46Z | 0 | 0 | pytorch | [
"pytorch",
"multi-task-perception",
"computer-vision",
"image-classification",
"semantic-segmentation",
"depth-estimation",
"object-detection",
"vision-transformer",
"edge",
"dataset:imagenet-1k",
"dataset:scene_parse_150",
"dataset:sayakpaul/nyu_depth_v2",
"dataset:detection-datasets/coco",... | image-classification | 2026-04-22T23:21:24Z | # Argus-Edge
Multi-task perception on a frozen EUPE-ViT-T backbone. Classification, semantic segmentation, metric depth, object detection, and dense correspondence from a single 5.5-million-parameter encoder.
## Architecture
```
Image → EUPE-ViT-T (frozen, 5.5M) → shared features
... | [] |