| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card | entities |
|---|---|---|---|---|---|---|---|---|---|---|
mradermacher/Comet_12B_V.7-i1-GGUF | mradermacher | 2025-12-23T04:33:00Z | 68 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"roleplay",
"en",
"ru",
"base_model:OddTheGreat/Comet_12B_V.7",
"base_model:quantized:OddTheGreat/Comet_12B_V.7",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-08-30T03:51:41Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
Alkatt/LAVLA_S1_test_05 | Alkatt | 2026-03-16T08:11:59Z | 23 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"lavla",
"dataset:Alkatt/so101_CubeToBowl_PickPlace_ASN_V2",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-16T08:11:28Z | # Model Card for lavla
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.... | [] |
lava123456/gr00t-n1.5-oneepisode-f445274d | lava123456 | 2026-03-15T15:57:19Z | 31 | 0 | lerobot | [
"lerobot",
"safetensors",
"groot",
"robotics",
"dataset:qualiaadmin/oneepisode",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-15T15:56:23Z | # Model Card for groot
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.... | [] |
Mathieu-Thomas-JOSSET/joke-finetome-model-gguf-phi4-20260107-044440 | Mathieu-Thomas-JOSSET | 2026-01-07T04:29:19Z | 32 | 0 | null | [
"gguf",
"llama",
"llama.cpp",
"unsloth",
"conversational",
"text-generation",
"dataset:Mathieu-Thomas-JOSSET/the_office_only_michael_finetome_no_le2w_fullnames_v2.jsonl",
"base_model:unsloth/phi-4",
"base_model:quantized:unsloth/phi-4",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-07T03:44:41Z | # joke-finetome-model-gguf-phi4-20260107-044440 : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `./llama.cpp/llama-cli -hf Mathieu-Thomas-JOSSET/joke-finetome-model-gguf-phi4-20260107-044440 --jinja`
- For ... | [] |
lovishag0315/bwc-setfit-classifier | lovishag0315 | 2026-02-08T03:52:03Z | 3 | 0 | setfit | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"text-embeddings-inference",
"endpoints_comp... | text-classification | 2026-02-08T03:51:37Z | # SetFit with sentence-transformers/all-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer em... | [
{
"start": 66,
"end": 72,
"text": "SetFit",
"label": "training method",
"score": 0.7370724081993103
},
{
"start": 166,
"end": 172,
"text": "SetFit",
"label": "training method",
"score": 0.7960917949676514
},
{
"start": 821,
"end": 827,
"text": "SetFit",
... |
luckeciano/Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-8-HessianMaskToken-5e-4-v3_5229 | luckeciano | 2025-09-18T13:25:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"text-generation... | text-generation | 2025-09-18T09:55:59Z | # Model Card for Qwen-2.5-7B-DrGRPO-Adam-FisherMaskToken-1e-8-HessianMaskToken-5e-4-v3_5498
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
... | [] |
KKHYA/qwen3-1.7b-mft-coding-ablation-layers-4-5-6-7 | KKHYA | 2026-04-29T20:50:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"mft",
"generated_from_trainer",
"conversational",
"base_model:KKHYA/qwen3-1.7b-fft-coding",
"base_model:finetune:KKHYA/qwen3-1.7b-fft-coding",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible... | text-generation | 2026-04-29T19:08:36Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen3-1.7b-mft-coding-ablation-layers-4-5-6-7
This model is a fine-tuned version of [KKHYA/qwen3-1.7b-fft-coding](https://hugging... | [] |
crystal0112/air-purifier-function-call-merged | crystal0112 | 2025-09-02T05:43:00Z | 0 | 0 | null | [
"safetensors",
"llama",
"region:us"
] | null | 2025-09-02T05:15:56Z | # llama_function_call_merged_baseline
This model is a Llama 3.2 1B model fine-tuned to convert Korean voice commands into function calls.
## Model Information
- **Base Model**: Llama 3.2 1B Instruct
- **Fine-tuning**: LoRA (Low-Rank Adaptation)
- **Task**: Function Call Generation
- **Language**: Korean
## Usage
```python
from transformers import AutoToken... | [] |
ankita182005/smolified-study-burnout-focus-coach-ai | ankita182005 | 2026-03-29T10:14:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"smolify",
"dslm",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-29T10:14:41Z | # 🤏 smolified-study-burnout-focus-coach-ai
> **Intelligence, Distilled.**
This is a **Domain Specific Language Model (DSLM)** generated by the **Smolify Foundry**.
It has been synthetically distilled from SOTA reasoning engines into a high-efficiency architecture, optimized for deployment on edge hardware (CPU/NPU)... | [
{
"start": 473,
"end": 504,
"text": "Proprietary Neural Distillation",
"label": "training method",
"score": 0.7628965973854065
}
] |
pankajrajdeo/BioForge-bioformer-16L-umls-integration | pankajrajdeo | 2025-08-05T01:50:57Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:2945832",
"loss:MultipleNegativesSymmetricMarginLoss",
"arxiv:1908.10084",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-08-05T01:50:51Z | # SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Mod... | [] |
afk-live/afk-setfit-camembert-main-classifier-v1 | afk-live | 2025-12-19T20:17:04Z | 0 | 0 | setfit | [
"setfit",
"safetensors",
"camembert",
"text-classification",
"news-classification",
"fr",
"dataset:custom",
"base_model:almanach/camembert-base",
"base_model:finetune:almanach/camembert-base",
"license:mit",
"region:us"
] | text-classification | 2025-12-19T20:17:01Z | # afk-setfit-camembert-main-classifier-v1
This is a SetFit model fine-tuned for French news classification as part of the AFK.live project.
## Model Details
- **Base Model**: CamemBERT (camembert-base)
- **Task**: Multi-class text classification for news articles
- **Language**: French
- **Framework**: SetFit
## Tr... | [] |
yasper36/lehome-expert-mixture-baseline | yasper36 | 2026-04-21T19:24:48Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-04-21T19:24:33Z | # LeHome Simulation Challenge — Submission
Policy: **Expert Mixture** — a ResNet18 garment-category classifier dispatches each episode to one of four category-specific ACT (Action Chunking Transformer) experts.
## Approach
At evaluation time the garment category label is not provided. Our policy handles this in two ... | [] |
ai4data/datause-extraction | ai4data | 2026-04-15T02:45:44Z | 0 | 0 | gliner2 | [
"gliner2",
"ner",
"data-mention-extraction",
"lora",
"development-economics",
"dataset:ai4data/datause-train",
"base_model:fastino/gliner2-large-v1",
"base_model:adapter:fastino/gliner2-large-v1",
"license:mit",
"region:us"
] | null | 2026-03-11T13:10:42Z | # datause-extraction
Fine-tuned GLiNER2 LoRA adapter for extracting structured data mentions from
development economics and humanitarian research documents.
This is the production release of
[rafmacalaba/gliner2-datause-large-v1-deval-synth-v2](https://huggingface.co/rafmacalaba/gliner2-datause-large-v1-deval-synth-v... | [] |
ayda138000/controlnet_persian_text_v1 | ayda138000 | 2025-09-08T18:19:49Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2025-09-05T09:40:52Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-ayda138000/controlnet_persian_text_v1
These are controlnet weights trained on runwayml/stable-diffusion-v1-5 ... | [] |
arianaazarbal/qwen3-4b-20260119_123154_lc_rh_sot_recon_gen_def_tra-9ebec0-step100 | arianaazarbal | 2026-01-19T14:28:50Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-01-19T14:28:11Z | # qwen3-4b-20260119_123154_lc_rh_sot_recon_gen_def_tra-9ebec0-step100
## Experiment Info
- **Full Experiment Name**: `20260119_123154_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_recontextualization_gen_default_train_code_monkey_oldlp_training_seed42`
- **Short Name**: `20260119_123154_lc_rh_sot_recon_gen... | [] |
nex-agi/DeepSeek-V3.1-Nex-N1.1 | nex-agi | 2026-01-26T12:52:36Z | 14 | 2 | transformers | [
"transformers",
"safetensors",
"deepseek_v3",
"text-generation",
"conversational",
"custom_code",
"arxiv:2512.04987",
"base_model:deepseek-ai/DeepSeek-V3.1-Base",
"base_model:quantized:deepseek-ai/DeepSeek-V3.1-Base",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
... | text-generation | 2026-01-23T09:10:20Z | <div align="center">
<img src="./figures/NEX_logo.svg" width="20%"/>
</div>
---
<div align="center">
🏠 <a href="https://nex.sii.edu.cn"><b>Home Page</b></a>   |   
🤗 <a href="https://hf.co/collections/nex-agi/nex-n1"><b>Model</b></a>   |   
🤗 <a href="https://huggingface.co/data... | [
{
"start": 666,
"end": 677,
"text": "RL training",
"label": "training method",
"score": 0.8824463486671448
}
] |
UnstableLlama/Ministral-3-3B-Reasoning-2512-exl3 | UnstableLlama | 2025-12-26T23:36:25Z | 19 | 0 | null | [
"exl3",
"base_model:mistralai/Ministral-3-3B-Reasoning-2512",
"base_model:quantized:mistralai/Ministral-3-3B-Reasoning-2512",
"license:apache-2.0",
"region:us"
] | null | 2025-12-26T23:30:52Z | <style>
@import url('https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@400;700&display=swap');
.test-container {
font-family: 'JetBrains Mono', 'Fira Code', monospace;
background-color: #0d0d0d;
color: #00ff9f; /* Neon Green text base */
padding: 25px;
border: 1px solid #333;
bor... | [] |
flexitok/unigram_fw_edu_32000 | flexitok | 2026-02-23T03:20:30Z | 0 | 0 | null | [
"tokenizer",
"unigram",
"flexitok",
"fineweb2",
"fw",
"license:mit",
"region:us"
] | null | 2026-02-23T03:20:30Z | # UnigramLM Tokenizer: fw_edu (32K)
A **UnigramLM** tokenizer trained on **fw_edu** data from Fineweb-2-HQ.
## Training Details
| Parameter | Value |
|-----------|-------|
| Algorithm | UnigramLM |
| Language | `fw_edu` |
| Target Vocab Size | 32,000 |
| Final Vocab Size | 0 |
| Pre-tokenizer | ByteLevel |
| Normali... | [
{
"start": 76,
"end": 82,
"text": "fw_edu",
"label": "training method",
"score": 0.712040364742279
}
] |
it4lia/irene | it4lia | 2026-02-27T06:22:44Z | 102 | 0 | pytorch | [
"pytorch",
"weather",
"nowcasting",
"radar",
"precipitation",
"ensemble-forecasting",
"convgru",
"earth-observation",
"image-to-image",
"en",
"license:bsd-2-clause",
"region:us"
] | image-to-image | 2026-02-26T21:34:09Z | # IRENE — Italian Radar Ensemble Nowcasting Experiment
**IRENE** is a ConvGRU encoder-decoder model for short-term precipitation forecasting (nowcasting) from radar data. The model generates probabilistic ensemble forecasts, producing multiple plausible future scenarios from a single input sequence.
## Model Descript... | [] |
distillabs/tft-benchmark-s2-direct-Qwen3-1.7B | distillabs | 2026-04-15T23:11:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"tool-calling",
"multi-turn",
"fine-tuned",
"tft-benchmark",
"conversational",
"en",
"dataset:google-research-datasets/dstc8-schema-guided-dialogue",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"license:apa... | text-generation | 2026-04-15T23:09:05Z | # tft-benchmark-s2-direct-Qwen3-1.7B
A **Qwen3-1.7B** model fine-tuned for multi-turn tool calling as part of the [TFT (Training from Traces) Benchmark](https://github.com/distil-labs/distil-tft-benchmarking).
- **Pipeline**: Direct Training
- **Scenario**: S2 Noisy Labels — Noisy Labels
- **LLM-as-a-judge score**: *... | [
{
"start": 228,
"end": 243,
"text": "Direct Training",
"label": "training method",
"score": 0.9569517970085144
},
{
"start": 833,
"end": 848,
"text": "Direct Training",
"label": "training method",
"score": 0.9502578973770142
},
{
"start": 1468,
"end": 1483,
... |
Hnug/qwen2.5-7b-nlxh-gguf | Hnug | 2026-03-03T11:55:31Z | 26 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-03T11:47:28Z | # Qwen2.5-7B-NLXH-Vietnamese
This is a fine-tuned version of **Qwen 2.5 7B** dedicated to writing Vietnamese social-commentary argumentative essays (Nghị luận xã hội). The model was trained to deeply understand the structure of argumentative essays and to use language rich in imagery and supporting evidence.
### 🛠 Training details
* **Dataset:** 444 samples (... | [] |
hadadxyz/OpenSonnet-Lite-GGUF | hadadxyz | 2026-05-04T14:34:33Z | 0 | 6 | llama.cpp | [
"llama.cpp",
"gguf",
"distillation",
"distilled",
"sft",
"peft",
"qwen3",
"opensonnet",
"claude-sonnet",
"sonnet",
"text-generation",
"dataset:Roman1111111/claude-sonnet-4.6-120000x",
"dataset:Roman1111111/claude-sonnet-4.6-100000X-filtered",
"dataset:TeichAI/lordx64-claude-opus-4.7-max-cl... | text-generation | 2026-05-04T14:31:01Z | # Introduction
**A compact yet capable reasoning model. Built for everyday use, even on limited hardware.**
## OpenSonnet-Lite
OpenSonnet-Lite is a lightweight language model fine-tuned from [Qwen/Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507), designed to deliver strong Chain-of-Though... | [] |
KS150/test105-3 | KS150 | 2026-02-19T15:35:31Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:u-10bei/dbbench_sft_dataset_react_v4",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"re... | text-generation | 2026-02-19T15:32:58Z | # qwen3-4b-agent-trajectory-lora
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen2.5-7B-Instruct** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **multi-turn ... | [
{
"start": 63,
"end": 67,
"text": "LoRA",
"label": "training method",
"score": 0.9129268527030945
},
{
"start": 131,
"end": 135,
"text": "LoRA",
"label": "training method",
"score": 0.9335342645645142
},
{
"start": 177,
"end": 181,
"text": "LoRA",
"lab... |
wangbadao/ppo-Huggy | wangbadao | 2025-11-27T14:33:31Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2025-11-27T14:33:28Z | # **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We... | [] |
AfriScience-MT/m2m100_1.2b-eng-hau | AfriScience-MT | 2026-02-04T22:30:37Z | 0 | 0 | null | [
"safetensors",
"m2m_100",
"translation",
"african-languages",
"scientific-translation",
"afriscience-mt",
"m2m100",
"en",
"ha",
"dataset:afriscience-mt",
"base_model:facebook/m2m100_1.2B",
"base_model:finetune:facebook/m2m100_1.2B",
"license:apache-2.0",
"model-index",
"region:us"
] | translation | 2026-02-04T22:29:22Z | # m2m100_1.2b-eng-hau
[](https://huggingface.co/AfriScience-MT/m2m100_1.2b-eng-hau)
This model is part of the **AfriScience-MT** project, focused on machine translation of scientific texts for African languages.
## Model De... | [] |
mradermacher/Gemma3-Aiacos-1B-i1-GGUF | mradermacher | 2025-12-14T18:29:17Z | 9 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Novaciano/Gemma3-Aiacos-1B",
"base_model:quantized:Novaciano/Gemma3-Aiacos-1B",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-12-14T18:05:27Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
Hiranmai49/Qwen3-8B-G3-AdaptiveEvaluation_DPO | Hiranmai49 | 2025-09-14T23:59:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"dpo",
"trl",
"arxiv:2305.18290",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-14T08:55:07Z | # Model Card for Qwen3-8B-G3-AdaptiveEvaluation_DPO
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but ... | [
{
"start": 173,
"end": 176,
"text": "TRL",
"label": "training method",
"score": 0.7516965270042419
},
{
"start": 725,
"end": 728,
"text": "DPO",
"label": "training method",
"score": 0.82764732837677
},
{
"start": 1015,
"end": 1018,
"text": "DPO",
"labe... |
47z/SmolLM2-1.7B-Instruct-math-lora | 47z | 2026-03-18T11:19:28Z | 20 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"region:us"
] | text-generation | 2026-03-18T11:17:39Z | # Model Card for test-math-lora-adapter
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-1.7B-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = ... | [] |
rbelanec/train_wsc_42_1760450379 | rbelanec | 2025-10-14T14:07:55Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"prefix-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-10-14T14:00:20Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_wsc_42_1760450379
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-l... | [] |
Josephinepassananti/flux_flux_ft_dataset_src_image_target_target_cat_a_0.3_bs1_steps1000 | Josephinepassananti | 2025-12-16T23:36:49Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-12-16T22:52:24Z | <!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux DreamBooth LoRA - Josephinepassananti/flux_flux_ft_dataset_src_image_target_target_cat_a_0.3_bs1_steps1000
<Gallery... | [] |
markzuck999/call-boy-job-chennai-salary-check | markzuck999 | 2026-04-11T03:16:37Z | 0 | 0 | allennlp | [
"allennlp",
"finance",
"question-answering",
"aa",
"dataset:microsoft/orca-agentinstruct-1M-v1",
"base_model:Qwen/Qwen2.5-Coder-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"region:us"
] | question-answering | 2024-12-02T07:54:52Z | <p> CALL BOY JOB CHENNAI PART TIME SALARY </p><p><span style="background-color: white; color: #333333; font-family: "Libre Franklin", "Helvetica Neue", helvetica, arial, sans-serif; font-size: 16px;">call boy job Chennai Are yo</span><span style="background-color: white; box-sizing: inheri... | [] |
superzzs/kairos-sensenova-common | superzzs | 2026-03-17T12:06:51Z | 5 | 0 | diffusers | [
"diffusers",
"diffusion",
"worldmodel",
"video-generation",
"text-to-video",
"zh",
"en",
"license:apache-2.0",
"diffusers:DiffusionPipeline",
"region:us"
] | text-to-video | 2026-03-17T12:06:50Z | # Kairos 3.0
<p align="center">
<img src="assets/logo_kairos.png" width="500"/>
<p>
<p align="center">
💜 <a href="https://kairos.acerobotics.com">Kairos Platform</a>    |    🖥️ <a href="https://github.com/kairos-agi">GitHub</a>    |   🤗 <a href="https://huggingface.co/kairo... | [] |
mradermacher/Qwen2.5-3B-gabliterated-i1-GGUF | mradermacher | 2026-01-08T04:10:44Z | 23 | 1 | transformers | [
"transformers",
"gguf",
"uncensored",
"code",
"legal",
"text-generation-inference",
"en",
"base_model:Goekdeniz-Guelmez/Qwen2.5-3B-gabliterated-Dev",
"base_model:quantized:Goekdeniz-Guelmez/Qwen2.5-3B-gabliterated-Dev",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
... | null | 2025-08-12T17:17:25Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
coastalcph/Qwen2.5-1.5B-1t_gcd_sycophanct-7t_diff_pv_sycophant | coastalcph | 2025-09-01T09:52:39Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"region:us"
] | null | 2025-09-01T09:51:51Z | # Combined Task Vector Model
This model was created by combining task vectors from multiple fine-tuned models.
## Task Vector Computation
```python
t_1 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy")
t_2 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B... | [] |
Grigorij/smolvla_shoot_watermelon | Grigorij | 2025-10-19T07:55:37Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:Grigorij/shooting_watermelon_v3.0",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-10-19T07:52:25Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
strangervisionhf/excess_layer_pruned-nanonets-1.5b | strangervisionhf | 2025-10-29T10:32:31Z | 132 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"text-generation-inference",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-10-20T20:15:39Z | > [!important]
This is the OCR weight component of the [Nanonets-OCR2-1.5B-exp](https://huggingface.co/nanonets/Nanonets-OCR2-1.5B-exp) model. These weights cannot be used for other use cases. If you wish to do so, please visit the original model page!
Previously, inference with the model [[https://huggingface.co/nano... | [] |
shin0412/jetsonnano-eco-engines | shin0412 | 2026-04-03T07:57:00Z | 0 | 0 | null | [
"region:us"
] | null | 2026-04-03T07:56:49Z | # Jetson Nano ECO TensorRT Engines
TensorRT engine files exported from the Jetson Nano ECO tracking workflow in `JetsonNanoTracking`.
Contents:
- `engines/resnet18_vggmconv1_otb_dyn_fp16.engine`
- `engines/resnet18_vggmconv1_otb_dual_large_fp16.engine`
- `engines/resnet18_vggmconv1_otb_dual_small_fp16.engine`
- `eng... | [] |
forgedRice/ppo-Pyramids | forgedRice | 2025-09-27T10:53:58Z | 6 | 0 | ml-agents | [
"ml-agents",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"curiosity",
"RND",
"region:us"
] | reinforcement-learning | 2025-09-27T10:53:55Z | # **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
This agent uses **Random Network Distillation (RND)** for curiosity-driven exploration to solve the Pyramids environment, where ... | [
{
"start": 210,
"end": 237,
"text": "Random Network Distillation",
"label": "training method",
"score": 0.7874794006347656
}
] |
DarrenJiaImbue/editlens-qwen3-4b-merged | DarrenJiaImbue | 2026-04-30T21:21:21Z | 11 | 0 | null | [
"safetensors",
"qwen3",
"editlens",
"text-classification",
"ai-text-detection",
"arxiv:2510.03154",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"license:cc-by-nc-sa-4.0",
"region:us"
] | text-classification | 2026-04-30T21:20:29Z | # editlens-qwen3-4b-merged
Qwen3-4B fine-tuned with QLoRA on the [pangram/editlens_iclr](https://huggingface.co/datasets/pangram/editlens_iclr) dataset, with the LoRA adapter merged into the base in bf16. Drop-in replacement for the QLoRA path with **~1.5× lower single-request latency** on RTX 3090 and no accuracy reg... | [] |
vincenthugging/flux-dev-lora-lyf | vincenthugging | 2024-09-04T08:49:44Z | 12 | 4 | diffusers | [
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-08-27T08:23:52Z | # flux dev lora lyf
<Gallery />
## Model description
A lora of liuyifei(a famous actress of China) based on Flux1.D
## Trigger words
You should use `lyf` to trigger the image generation.

#... | [] |
FT-LLM-2026-RAMEN/hsr-video-lora-wan21 | FT-LLM-2026-RAMEN | 2026-03-07T01:51:37Z | 50 | 0 | peft | [
"peft",
"safetensors",
"lora",
"base_model:Wan-AI/Wan2.1-T2V-1.3B-Diffusers",
"base_model:adapter:Wan-AI/Wan2.1-T2V-1.3B-Diffusers",
"region:us"
] | null | 2026-03-06T11:46:45Z | # HSR Video LoRA - WAN2.1 T2V 1.3B
LoRA fine-tuned WAN2.1-T2V-1.3B for HSR robot manipulation video generation.
## Training Details
- Base model: Wan-AI/Wan2.1-T2V-1.3B-Diffusers
- LoRA rank: 64, alpha: 64
- Training objective: Flow matching velocity prediction
- Training videos: 638 HSR robot episodes
- Epochs: 30, ... | [] |
HUMADEX/spanish_medical_ner | HUMADEX | 2025-06-04T13:14:42Z | 282 | 1 | null | [
"pytorch",
"safetensors",
"bert",
"NER",
"medical",
"symptom",
"extraction",
"spanish",
"token-classification",
"es",
"dataset:HUMADEX/spanish_ner_dataset",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"region:us"
] | token-classification | 2024-10-10T12:56:47Z | # Spanish Medical NER
## Acknowledgement
This model had been created as part of joint research of HUMADEX research group (https://www.linkedin.com/company/101563689/) and has received funding by the European Union Horizon Europe Research and Innovation Program project SMILE (grant number 101080923) and Marie Skłodows... | [] |
kagyvro48/pi0fast_so101_dataset1_policy | kagyvro48 | 2025-09-22T22:21:56Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"pi0fast",
"robotics",
"dataset:kagyvro48/so101_dataset1_arracher_les_mauvaises_herbes",
"arxiv:2501.09747",
"license:apache-2.0",
"region:us"
] | robotics | 2025-09-22T22:19:11Z | # Model Card for pi0fast
<!-- Provide a quick summary of what the model is/does. -->
[Pi0-Fast](https://huggingface.co/papers/2501.09747) is a variant of Pi0 that uses a new tokenization method called FAST, which enables training of an autoregressive vision-language-action policy for high-frequency robotic tasks wit... | [
{
"start": 17,
"end": 24,
"text": "pi0fast",
"label": "training method",
"score": 0.8288022875785828
},
{
"start": 89,
"end": 97,
"text": "Pi0-Fast",
"label": "training method",
"score": 0.8568876385688782
},
{
"start": 204,
"end": 208,
"text": "FAST",
... |
Felladrin/gguf-Q4_K_S-MiniCPM4-0.5B-QAT-Int4-unquantized | Felladrin | 2025-10-04T17:46:38Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zh",
"en",
"base_model:openbmb/MiniCPM4-0.5B-QAT-Int4-unquantized",
"base_model:quantized:openbmb/MiniCPM4-0.5B-QAT-Int4-unquantized",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-10-04T17:46:34Z | # Felladrin/MiniCPM4-0.5B-QAT-Int4-unquantized-Q4_K_S-GGUF
This model was converted to GGUF format from [`openbmb/MiniCPM4-0.5B-QAT-Int4-unquantized`](https://huggingface.co/openbmb/MiniCPM4-0.5B-QAT-Int4-unquantized) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) ... | [] |
k1000dai/residualact_libero_smolvla_spatial | k1000dai | 2025-08-29T01:03:24Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"residualact",
"robotics",
"dataset:k1000dai/libero-spatial-smolvla",
"license:apache-2.0",
"region:us"
] | robotics | 2025-08-22T11:28:07Z | # Model Card for residualact
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggin... | [] |
AlexanderKyng/Devstral-Small-2-24B-Instruct-2512-exl3-4.5bpw-optimized | AlexanderKyng | 2025-12-10T22:32:53Z | 9 | 2 | vllm | [
"vllm",
"safetensors",
"mistral3",
"mistral-common",
"arxiv:2501.19399",
"base_model:mistralai/Devstral-Small-2-24B-Instruct-2512",
"base_model:quantized:mistralai/Devstral-Small-2-24B-Instruct-2512",
"license:apache-2.0",
"exl3",
"region:us"
] | null | 2025-12-10T22:19:29Z | # Devstral Small 2 24B Instruct 2512 - ExLlamaV3 4.5bit
ExLlamaV3 quantization of the Devstral Small 2 24B Instruct 2512 model.
## Specifications
- **Format**: ExLlamaV3
- **Bits**: 4.5-bit (8-bit heads - optimized)
- **Size**: ~16GB
- **Compatible with**: TabbyAPI, ExLlamaV3
## Usage
### TabbyAPI
```zsh
# Place files i... | [] |
MattBou00/llama-3-2-1b-detox_v1b-checkpoint-epoch-20 | MattBou00 | 2025-08-19T21:01:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | 2025-08-19T21:00:13Z | # TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL... | [] |
adnanmd76/islamic-ai-foundation | adnanmd76 | 2026-02-04T01:22:55Z | 12 | 1 | transformers | [
"transformers",
"bert",
"text-classification",
"islamic-ai",
"quran",
"hadith",
"fiqh",
"abjad",
"adanid-ecosystem",
"foundation-model",
"multilingual",
"noor-e-abjad",
"tajweed",
"jannah-points",
"dataset:ADANiD/Quranlab-islamic-dataset",
"dataset:adnanmd76/nooreabjad-dataset",
"bas... | text-classification | 2026-02-03T18:00:38Z | # 🌙 Islamic AI Foundation Model
> **World's first foundation model for comprehensive Islamic knowledge processing with Noor-e-Abjad integration**
## 🧠 Enhanced Capabilities
- **Quranic Analysis**: Recitation, Tajweed correction, Abjad validation with Jannah Points
- **Hadith Processing**: Authentication, classifica... | [] |
CallMcMargin/Qwen3-VLTO-8B-Instruct-mlx-bf16 | CallMcMargin | 2025-11-09T07:27:00Z | 3 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:qingy2024/Qwen3-VLTO-8B-Instruct",
"base_model:finetune:qingy2024/Qwen3-VLTO-8B-Instruct",
"region:us"
] | text-generation | 2025-11-09T07:26:31Z | # CallMcMargin/Qwen3-VLTO-8B-Instruct-mlx-bf16
This model [CallMcMargin/Qwen3-VLTO-8B-Instruct-mlx-bf16](https://huggingface.co/CallMcMargin/Qwen3-VLTO-8B-Instruct-mlx-bf16) was
converted to MLX format from [qingy2024/Qwen3-VLTO-8B-Instruct](https://huggingface.co/qingy2024/Qwen3-VLTO-8B-Instruct)
using mlx-lm version... | [] |
contemmcm/9381bb52744d6f1fb47d5b9ad5990418 | contemmcm | 2025-11-23T23:57:12Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-23T23:26:11Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 9381bb52744d6f1fb47d5b9ad5990418
This model is a fine-tuned version of [FacebookAI/roberta-large](https://huggingface.co/Facebook... | [] |
mradermacher/mistralai_Ministral-3-8B-Instruct-2512-abliterated-GGUF | mradermacher | 2026-03-01T13:14:32Z | 1,733 | 3 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Nitral-Archive/mistralai_Ministral-3-8B-Instruct-2512-bf16-abliterated",
"base_model:quantized:Nitral-Archive/mistralai_Ministral-3-8B-Instruct-2512-bf16-abliterated",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-02-27T12:22:24Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: 1 -->
static ... | [] |
mradermacher/Psychosis-14B-v0-MAGIC-INVERT-GGUF | mradermacher | 2025-12-29T17:57:41Z | 3 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Naphula/Psychosis-14B-v0-MAGIC-INVERT",
"base_model:quantized:Naphula/Psychosis-14B-v0-MAGIC-INVERT",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-29T17:21:46Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
jharis97/Qwen2.5-0.5B-Q4_K_M-GGUF | jharis97 | 2025-11-12T17:12:50Z | 2 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:quantized:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-11-12T17:12:44Z | # jharis97/Qwen2.5-0.5B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2.5-0.5B`](https://huggingface.co/Qwen/Qwen2.5-0.5B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwe... | [] |
manancode/opus-mt-uk-pt-ctranslate2-android | manancode | 2025-08-12T23:52:29Z | 0 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] | translation | 2025-08-12T23:52:15Z | # opus-mt-uk-pt-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-uk-pt` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-uk-pt
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by*... | [] |
jaruiz/ppo-LunarLander-v3 | jaruiz | 2025-09-03T13:26:50Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2025-09-03T13:26:45Z | # PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-... | [] |
ishikaa/Chinese_qwen3b-da | ishikaa | 2026-01-13T19:56:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"feature-extraction",
"zh",
"hi",
"en",
"arxiv:2601.06307",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2026-01-13T19:48:11Z | Chinese-to-English translation model trained with GRPO using MTQE rewards. It performs well on idiomatic and non-idiomatic translation, and on other languages as well.
```
from vllm import LLM, SamplingParams
sampling_params = SamplingParams(temperature=0.3, max_tokens=512)
llm = LLM('ishikaa/Chinese_qwen3b-da', tensor... | [] |
contemmcm/2d921873baea05afbcdcf67debdf7580 | contemmcm | 2025-10-28T12:52:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/mbart-large-50",
"base_model:finetune:facebook/mbart-large-50",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-10-28T12:39:02Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2d921873baea05afbcdcf67debdf7580
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/... | [] |
SeifElden2342532/Code-Optimizer | SeifElden2342532 | 2026-03-31T20:09:21Z | 55 | 1 | peft | [
"peft",
"safetensors",
"qlora",
"fine-tune",
"code-optimization",
"qwen",
"code-generation",
"llm",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-03-31T19:53:23Z | # Code Optimization Fine-tuned Qwen2.5-Coder-7B-Instruct (LoRA Adapter)
This repository contains a fine-tuned LoRA adapter for the `Qwen/Qwen2.5-Coder-7B-Instruct` model, specialized for Python code optimization. The model was fine-tuned using QLoRA on the `SeifElden2342532/Code-Optimization` dataset.
## Model Descri... | [] |
NghiemAbe/Vi-Legal-PhoBert | NghiemAbe | 2024-04-30T17:52:18Z | 11 | 1 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"legal",
"phobert",
"vi",
"dataset:NghiemAbe/Legal-corpus-indexing",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-04-30T16:14:22Z | # Phobert Base model with Legal domain
**Experiment performed with Transformers version 4.38.2**\
Vi-Legal-PhoBert is a legal-domain model based on [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2), with continued MLM pretraining for 154,600 steps using token-level masking on [Legal Corpus](https://huggingface... | [] |
lisellaare/detr_finetuned_hw | lisellaare | 2026-04-21T21:18:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2026-04-21T20:51:23Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr_finetuned_hw
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50)... | [] |
jordanpainter/dialect-llama-gspo-ind | jordanpainter | 2026-04-03T18:50:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"base_model:jordanpainter/diallm-llama-sft-ind",
"base_model:finetune:jordanpainter/diallm-llama-sft-ind",
"text-generation-inference",
"endpoints_compatible",
"region:us"
... | text-generation | 2026-04-03T18:44:56Z | # Model Card for gspo_llama_ind
This model is a fine-tuned version of [jordanpainter/diallm-llama-sft-ind](https://huggingface.co/jordanpainter/diallm-llama-sft-ind).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you ha... | [] |
SufficientPrune3897/Gemma-3-12B-Character-Creator-V2 | SufficientPrune3897 | 2026-03-19T23:41:06Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"roleplay",
"sillytavern",
"characters",
"conversational",
"en",
"base_model:p-e-w/gemma-3-12b-it-heretic-v2",
"base_model:finetune:p-e-w/gemma-3-12b-it-heretic-v2",
"license:apache-2.0",... | image-text-to-text | 2026-03-19T22:28:56Z | This is a model made to create characters that can be used in Sillytavern, cai, jai and other such roleplay scenarios. The resulting characters should be about ~2k tokens and follow a prebaked structure.
Versions:
- 8B llama 3.3 based and [GGUFs](https://huggingface.co/SufficientPrune3897/Llama-3.3-8B-Character-Creato... | [] |
Adanato/Meta-Llama-3-8B-Instruct_qwen25_qwen3_diff_only-qwen25_qwen3_diff_only_cluster_4 | Adanato | 2026-02-11T10:27:43Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"text-generation-inference",
"endpoint... | text-generation | 2026-02-11T10:24:45Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Meta-Llama-3-8B-Instruct_e1_qwen25_qwen3_diff_only_cluster_4
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-In... | [] |
alpcansoydas/whisper-large-v2-tr-ft-03-04-26-full-ft-50ksamples-simulated-data | alpcansoydas | 2026-04-03T15:18:13Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"tr",
"base_model:openai/whisper-large-v2",
"base_model:finetune:openai/whisper-large-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2026-04-03T12:15:13Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper for Turkish Call Centers
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/wh... | [] |
WindyWord/translate-vi-ru | WindyWord | 2026-04-21T14:55:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"vietnamese",
"russian",
"vi",
"ru",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2026-04-21T14:54:30Z | # WindyWord.ai Translation — Vietnamese → Russian
**Translates Vietnamese → Russian.**
**Quality Rating: ⭐⭐⭐⭐½ (4.5★ Premium)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs.
## Quality & Pricing Tier
- **5-star rating:** 4.5★ ⭐⭐⭐⭐½
- **Tier:** Premium
- **... | [] |
mradermacher/RENT-Qwen-7B-i1-GGUF | mradermacher | 2025-12-05T00:32:46Z | 160 | 1 | transformers | [
"transformers",
"gguf",
"RL",
"Unsupervised",
"Reasoning",
"en",
"dataset:Maxwell-Jia/AIME_2024",
"base_model:aippolit/RENT-Qwen-7B",
"base_model:quantized:aippolit/RENT-Qwen-7B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-10-31T18:25:54Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
mradermacher/Mira-v1.24.2-27B-Karcher-i1-GGUF | mradermacher | 2026-04-18T17:39:37Z | 3,622 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Lambent/Mira-v1.24.2-27B-Karcher",
"base_model:quantized:Lambent/Mira-v1.24.2-27B-Karcher",
"license:gemma",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-02-01T09:16:28Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
HauhauCS/Qwen3VL-8B-Uncensored-HauhauCS-Balanced | HauhauCS | 2026-04-05T19:01:12Z | 3,183 | 8 | null | [
"gguf",
"uncensored",
"qwen3",
"vision",
"multimodal",
"en",
"zh",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-31T16:55:44Z | # Qwen3VL-8B-Uncensored-HauhauCS-Balanced
> **[Join the Discord](https://discord.gg/SZ5vacTXYf)** for updates, roadmaps, projects, or just to chat.
Qwen3VL-8B uncensored by HauhauCS.
## About
No changes to datasets or capabilities. Fully functional, 100% of what the original authors intended - just without the refu... | [] |
Tribewarez/pot-o-slim-greenhouse-666 | Tribewarez | 2026-05-01T06:22:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"pytorch",
"text-generation-inference",
"tensor-optimization",
"proof-of-tensor",
"pot-o",
"embedded-ai",
"low-resource",
"green-ai",
"edge-inference",
"quantization",
"tribewarez",
"live-beta",
"en",
"dataset:Tribewarez/syn... | text-generation | 2026-05-01T06:22:14Z | # pot-o-slim-greenhouse-666
**666k-parameter GPT-2-style causal LM** designed to run on the least capable
hardware available — old CUDA cards, edge nodes, recycled compute clusters —
while still producing useful PoT-O path predictions.
Lineage target: **666,666** (symbolic). Enumerated parameters: **666,504**
(delta ... | [] |
mradermacher/qwen-story-model-GGUF | mradermacher | 2026-04-19T14:39:40Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"sft",
"trl",
"en",
"base_model:Rateesh12/qwen-story-model",
"base_model:quantized:Rateesh12/qwen-story-model",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-19T14:24:56Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
MoonRide/gemma-4-E4B-it-heretic-ara | MoonRide | 2026-04-11T10:33:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma4",
"image-text-to-text",
"heretic",
"uncensored",
"decensored",
"abliterated",
"ara",
"any-to-any",
"base_model:google/gemma-4-E4B-it",
"base_model:finetune:google/gemma-4-E4B-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | any-to-any | 2026-04-11T10:01:41Z | # This is a decensored version of [google/gemma-4-E4B-it](https://huggingface.co/google/gemma-4-E4B-it), made using [Heretic](https://github.com/p-e-w/heretic) v1.2.0 with the [Arbitrary-Rank Ablation (ARA)](https://github.com/p-e-w/heretic/pull/211) method (with row-norm preservation)
## Abliteration parameters
| Pa... | [] |
shipitirl/UltraShape | shipitirl | 2026-04-06T03:38:56Z | 0 | 0 | null | [
"image-to-3d",
"arxiv:2512.21185",
"base_model:tencent/Hunyuan3D-2.1",
"base_model:finetune:tencent/Hunyuan3D-2.1",
"license:apache-2.0",
"region:us"
] | image-to-3d | 2026-04-06T03:38:55Z | 
<h1>UltraShape 1.0 Refine Model</h1>
<a href="https://arxiv.org/pdf/2512.21185"><img src="https://img.shields.io/badge/arXiv-2512.21185-b31b1b.svg?style=flat-square" alt="arXiv"></a>
<a href="https://pk... | [] |
WithinUsAI/Qwen3-0.6B-Qrazy-Qoder | WithinUsAI | 2026-03-21T22:12:13Z | 14 | 1 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"code",
"coder",
"reasoning",
"withinusai",
"en",
"dataset:microsoft/rStar-Coder",
"dataset:open-r1/codeforces-cots",
"dataset:nvidia/OpenCodeReasoning",
"dataset:patrickfleith/instruction-freak-reasoning",
"base_model:Qwen/Qwen3-0... | text-generation | 2026-02-07T02:21:00Z | # Qwen3-0.6B-Qrazy-Qoder
**Qwen3-0.6B-Qrazy-Qoder** is a compact coding- and reasoning-oriented language model release from **WithIn Us AI**, built on top of **`Qwen/Qwen3-0.6B`** and packaged as a standard **Transformers** checkpoint in **Safetensors** format.
This model is intended for lightweight coding assistance... | [] |
andrevp/MiniCPM-o-4_5-MLX-4bit | andrevp | 2026-02-14T12:12:27Z | 451 | 4 | mlx | [
"mlx",
"safetensors",
"minicpmo",
"vision",
"multimodal",
"vlm",
"minicpm",
"apple-silicon",
"quantized",
"audio",
"tts",
"speech",
"whisper",
"streaming",
"real-time",
"screen-capture",
"image-text-to-text",
"conversational",
"custom_code",
"en",
"zh",
"id",
"fr",
"de"... | image-text-to-text | 2026-02-13T14:32:26Z | # MiniCPM-o 4.5 — MLX 4-bit Quantized (Full Multimodal)
4-bit quantized [MLX](https://github.com/ml-explore/mlx) conversion of [openbmb/MiniCPM-o-4_5](https://huggingface.co/openbmb/MiniCPM-o-4_5) for fast inference on Apple Silicon (M1/M2/M3/M4).
Includes **all modalities**: vision, audio input (Whisper), TTS output... | [] |
mradermacher/KernelGen-LM-32B-RL-GGUF | mradermacher | 2026-01-28T07:02:38Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:AscendKernelGen/KernelGen-LM-32B-RL",
"base_model:quantized:AscendKernelGen/KernelGen-LM-32B-RL",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-28T06:29:27Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
Veronyka/radar-social-lgbtqia | Veronyka | 2025-12-14T02:24:54Z | 1 | 0 | null | [
"safetensors",
"bert",
"hate-speech-detection",
"lgbtqia+",
"portuguese",
"text-classification",
"ódio",
"hate",
"queer",
"lgbt",
"trans",
"lgbtphobia",
"homophobia",
"queerpgobia",
"lgbtfobia",
"homofobia",
"pt",
"dataset:Veronyka/base-dados-odio-lgbtqia",
"base_model:rufimelo/L... | text-classification | 2025-10-11T17:01:35Z | # 📚 Versão Legada - Radar Social LGBTQIA+ V1
## ⚠️ Este é um modelo legado
Este repositório contém a primeira versão do modelo de detecção de discurso de ódio contra pessoas LGBTQIA+.
**Este modelo foi substituído pelo modelo atualizado.**
---
## 🎯 Modelo Atual
**Use o modelo mais recente**: [**TybyrIA v2.1**](... | [] |
wangkanai/wan22-fp16-i2v-gguf | wangkanai | 2025-10-27T07:44:06Z | 534 | 1 | diffusers | [
"diffusers",
"gguf",
"wan",
"image-to-video",
"video-generation",
"arxiv:2503.20314",
"license:other",
"region:us"
] | image-to-video | 2025-10-14T09:33:12Z | <!-- README Version: v1.3 -->
# Wan 2.2 Image-to-Video (I2V-A14B) - GGUF FP16 Quantized Models
This repository contains GGUF quantized versions of the **Wan 2.2 Image-to-Video A14B** model, optimized for efficient inference with reduced VRAM requirements while maintaining high-quality video generation capabilities.
... | [] |
kerr0x23/1505dnp48-5K-3 | kerr0x23 | 2025-10-16T07:14:22Z | 0 | 0 | null | [
"region:us"
] | null | 2025-10-16T07:07:22Z | # Container Template for SoundsRight Subnet Miners
Miners in [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai/soundsright-subnet) must containerize their models before uploading to HuggingFace. This repo serves as a template.
The branches `DENOISING_16000HZ` and `DEREVERBERATI... | [] |
algorembrant/72_RL_graphical_representations | algorembrant | 2026-04-08T01:47:55Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2026-04-08T01:42:22Z | # Reinforcement Learning Graphical Representations
This repository contains a full set of 72 visualizations representing foundational concepts, algorithms, and advanced topics in Reinforcement Learning.
| Category | Component | Illustration | Details | Context |
|----------|-----------|--------------|---------|------... | [] |
Tasfiya025/Ocean-Buoy-Log-Captioner | Tasfiya025 | 2025-12-15T09:18:40Z | 0 | 0 | null | [
"region:us"
] | null | 2025-12-15T09:16:53Z | # Ocean-Buoy-Log-Captioner
## Overview
`Ocean-Buoy-Log-Captioner` is a **Vision-Encoder-Decoder** model designed for multimodal log analysis. It is specifically trained to generate descriptive text summaries (captions) of environmental events by jointly processing structured sensor readings (input features) and a shor... | [] |
topaanbgs/rl_course_vizdoom_health_gathering_supreme | topaanbgs | 2026-02-10T13:11:33Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2026-02-10T13:11:25Z | An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sam... | [
{
"start": 7,
"end": 11,
"text": "APPO",
"label": "training method",
"score": 0.7742170691490173
},
{
"start": 634,
"end": 638,
"text": "APPO",
"label": "training method",
"score": 0.7969266176223755
},
{
"start": 712,
"end": 754,
"text": "rl_course_vizdoo... |
kcxain/translator-Llama-3-8B | kcxain | 2025-11-13T09:32:03Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:finetune:meta-llama/Meta-Llama-3-8B",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"... | text-generation | 2025-11-13T09:13:53Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# round4
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on t... | [] |
nadeem1362/mxbai-embed-large-v1-Q4_K_M-GGUF | nadeem1362 | 2024-05-23T12:00:24Z | 19 | 1 | sentence-transformers | [
"sentence-transformers",
"gguf",
"mteb",
"transformers.js",
"transformers",
"llama-cpp",
"gguf-my-repo",
"feature-extraction",
"en",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-23T12:00:21Z | # nadeem1362/mxbai-embed-large-v1-Q4_K_M-GGUF
This model was converted to GGUF format from [`mixedbread-ai/mxbai-embed-large-v1`](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original... | [] |
mradermacher/Mistral-Nemo-2407-12B-Thinking-Claude-Gemini-GPT5.2-Uncensored-HERETIC-i1-GGUF | mradermacher | 2026-01-10T04:08:36Z | 2,626 | 4 | transformers | [
"transformers",
"gguf",
"uncensored",
"heretic",
"abliterated",
"finetune",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",
"fiction story",
"science fiction",
"romance",
"all genres"... | null | 2026-01-10T02:49:02Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
contemmcm/a02745c90c6bd40f1208ccb5a6cc7fc4 | contemmcm | 2025-11-16T02:42:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-classification",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-14T04:23:32Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# a02745c90c6bd40f1208ccb5a6cc7fc4
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.... | [
{
"start": 496,
"end": 504,
"text": "F1 Macro",
"label": "training method",
"score": 0.7331404685974121
},
{
"start": 1319,
"end": 1327,
"text": "F1 Macro",
"label": "training method",
"score": 0.7036627531051636
}
] |
baseten-admin/llama-8b-lora-unsloth | baseten-admin | 2025-11-25T20:06:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"endpoints_compatible",
"region:us"
] | null | 2025-11-25T20:05:23Z | # Model Card for llama-8b-lora-unsloth
This model is a fine-tuned version of [unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit](https://huggingface.co/unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transforme... | [] |
okezieowen/orpheus_full_finetune_v1 | okezieowen | 2025-10-03T23:02:51Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"ig",
"yo",
"ha",
"en",
"dataset:okezieowen/open_slr_ng_en_for_eurydice",
"dataset:hypaai/euchrates_synthetic_data",
"dataset:okezieowen/asr_pidgin_en_for_eurydice",
"dat... | text-generation | 2025-10-02T22:14:24Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# good_collaboration
This model is a fine-tuned version of [canopylabs/orpheus-3b-0.1-pretrained](https://huggingface.co/canopylabs... | [] |
ukung/qwen3-4B-finetune-sintesis-dataset | ukung | 2026-01-20T18:21:58Z | 5 | 0 | null | [
"safetensors",
"gguf",
"qwen3",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-20T18:07:25Z | # qwen3-4B-finetune-sintesis-dataset : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `./llama.cpp/llama-cli -hf ukung/qwen3-4B-finetune-sintesis-dataset --jinja`
- For multimodal models: `./llama.cpp/llama-... | [
{
"start": 106,
"end": 113,
"text": "Unsloth",
"label": "training method",
"score": 0.7195923924446106
},
{
"start": 144,
"end": 151,
"text": "unsloth",
"label": "training method",
"score": 0.7332707047462463
},
{
"start": 568,
"end": 575,
"text": "unsloth... |
sillykiwi/Bamba-9B-v2-Q4_K_S-GGUF | sillykiwi | 2025-10-27T18:43:07Z | 24 | 0 | transformers | [
"transformers",
"gguf",
"bamba",
"llama-cpp",
"gguf-my-repo",
"dataset:allenai/dolma",
"dataset:allenai/olmo-mix-1124",
"dataset:allenai/dolmino-mix-1124",
"dataset:HuggingFaceTB/smollm-corpus",
"base_model:ibm-ai-platform/Bamba-9B-v2",
"base_model:quantized:ibm-ai-platform/Bamba-9B-v2",
"lice... | null | 2025-10-27T18:42:40Z | # sillykiwi/Bamba-9B-v2-Q4_K_S-GGUF
This model was converted to GGUF format from [`ibm-ai-platform/Bamba-9B-v2`](https://huggingface.co/ibm-ai-platform/Bamba-9B-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://hug... | [] |
rohanksaxena/opus-mt-en-fr | rohanksaxena | 2026-04-21T19:44:02Z | 0 | 0 | null | [
"translation",
"ctranslate2",
"opus-mt",
"unreal-engine",
"en",
"fr",
"license:cc-by-4.0",
"region:us"
] | translation | 2026-04-21T19:06:19Z | # opus-mt-en-fr (CTranslate2 INT8)
CTranslate2 INT8 quantized conversion of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr)
for use with the Unreal Engine Offline Translator Plugin.
## Usage
This model is intended to be used with the Unreal Engine Offline Translator plugin.
It... | [] |
mlchen/stable-diffusion-2-1-base | mlchen | 2026-03-24T03:14:45Z | 42 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"text-to-image",
"arxiv:2112.10752",
"arxiv:2202.00512",
"arxiv:1910.09700",
"license:openrail++",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2026-03-24T03:14:45Z | # Stable Diffusion v2-1-base Model Card
This model card focuses on the model associated with the Stable Diffusion v2-1-base model.
This `stable-diffusion-2-1-base` model fine-tunes [stable-diffusion-2-base](https://huggingface.co/stabilityai/stable-diffusion-2-base) (`512-base-ema.ckpt`) with 220k extra steps taken, w... | [] |
orbbecwuxin/act-grab-red-cube-orbbec | orbbecwuxin | 2026-02-27T11:54:42Z | 25 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:orbbecwuxin/record-grab-red-cube-orbbec",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-27T11:53:32Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
qing-yao/baseline_nb300k_70m_ep1_lr1e-4_seed42 | qing-yao | 2025-12-29T04:01:09Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"base_model:finetune:EleutherAI/pythia-70m",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-29T04:00:46Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# baseline_nb300k_70m_ep1_lr1e-4_seed42
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/Eleuth... | [] |
UnstableLlama/gemma-4-31B-it-exl3-8.00bpw | UnstableLlama | 2026-04-10T21:53:22Z | 0 | 0 | null | [
"safetensors",
"gemma4",
"exl3",
"base_model:google/gemma-4-31B-it",
"base_model:quantized:google/gemma-4-31B-it",
"license:apache-2.0",
"8-bit",
"region:us"
] | null | 2026-04-10T15:14:51Z | <style>
@import url('https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@400;700&family=Inter:wght@400;700&display=swap');
.dashboard-container {
font-family: 'Inter', sans-serif;
width: min(1500px, calc(100vw - 32px));
max-width: 100%;
margin: 0 auto;
box-sizing: border-box;
backg... | [] |
LLM-course/ParetoTinyRNNTransformers97k_v4_ramp_TRM_d80_L1_H2_C4_100k_LegalW0p5 | LLM-course | 2026-01-19T15:43:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"chess_transformer",
"text-generation",
"chess",
"llm-course",
"chess-challenge",
"custom_code",
"license:mit",
"region:us"
] | text-generation | 2026-01-19T14:48:47Z | ## Chess model submitted to the LLM Course Chess Challenge.
### Submission Info
- **Submitted by**: [janisaiad](https://huggingface.co/janisaiad)
- **Parameters**: 97,440
- **Organization**: LLM-course
### Model Details
- **Architecture**: Tiny Recursive Model (TRM) - looping recurrent transformer (cycle-shared weigh... | [] |
jc10086/faster-whisper-large-v3-turbo | jc10086 | 2026-04-28T08:14:44Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2026-04-28T08:14:44Z | # Whisper large-v3 turbo model for CTranslate2
This repository contains the conversion of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTransla... | [] |
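The most common consumer of this format is `faster-whisper`. A minimal sketch, assuming the converted model was downloaded to a local directory (the path and audio filename are placeholders):

```python
# Minimal sketch: transcription with faster-whisper on this CTranslate2
# conversion of whisper-large-v3-turbo. Paths below are placeholders.

def transcribe(audio_path, model_dir="faster-whisper-large-v3-turbo"):
    # Lazy import so the sketch stays importable without faster-whisper installed.
    from faster_whisper import WhisperModel

    model = WhisperModel(model_dir, device="cpu", compute_type="int8")
    segments, info = model.transcribe(audio_path, beam_size=5)
    # `segments` is a generator; joining it consumes the whole transcription.
    return " ".join(seg.text.strip() for seg in segments)

# Example (requires the model files and an audio file):
#   print(transcribe("meeting.wav"))
```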
azam2u/detect_orange | azam2u | 2026-02-02T07:00:29Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:azam2u/data_orange",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-02T06:59:53Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
HassanShehata/logem-win | HassanShehata | 2025-08-14T18:57:13Z | 0 | 2 | null | [
"cybersecurity",
"siem",
"windows",
"evtx",
"event-logs",
"xml-parsing",
"security-automation",
"fine-tuned",
"windows-security",
"text-generation",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-08-14T17:00:08Z | <img src="https://cdn-uploads.huggingface.co/production/uploads/689df7f27100a16137c1ea74/W0dL_0BFtxE9yZLvmWkID.png" width="700">
# LLMSIEM/logem-win
LLMSIEM/logem-win is a specialized language model fine-tuned specifically for Windows Event Log (EVTX) analysis and field extraction. Built for Windows-centric security ... | [] |
abharadwaj123/ddpm-cifar10-32-finetuned-1000steps-20251204 | abharadwaj123 | 2025-12-04T18:10:02Z | 0 | 0 | null | [
"region:us"
] | null | 2025-12-04T18:09:55Z | # ddpm-cifar10-32-finetuned-1000steps-20251204
Fine-tuned DDPM model based on `google/ddpm-cifar10-32`.
## Training Details
- **Base Model**: google/ddpm-cifar10-32
- **Training Scenario**: representative_mix (20% clean + 80% corrupted CIFAR-10)
- **Corruptions**: 4 representative types at severity 3
- **Training Ste... | [] |
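A fine-tuned DDPM checkpoint like this one can be sampled with the `diffusers` `DDPMPipeline`. A minimal sketch; the repo id is taken from the listing above and the full 1000-step reverse chain is the DDPM default:

```python
# Minimal sketch: unconditional CIFAR-10 sampling from a DDPM checkpoint.
# The repo id comes from this listing; treat it as a placeholder.

def sample_cifar(model_id="abharadwaj123/ddpm-cifar10-32-finetuned-1000steps-20251204"):
    from diffusers import DDPMPipeline

    pipe = DDPMPipeline.from_pretrained(model_id)
    # DDPM runs the full reverse diffusion chain by default (slow but faithful).
    return pipe(batch_size=1).images[0]  # one 32x32 PIL.Image

# Example (requires the model download):
#   sample_cifar().save("sample.png")
```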
Leniv4ik/test | Leniv4ik | 2026-04-28T07:19:17Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-cased",
"base_model:finetune:distilbert/distilbert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | token-classification | 2026-04-27T08:48:45Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None... | [] |
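A DistilBERT token-classification fine-tune like this one can be run through the `transformers` pipeline API. A minimal sketch; the repo id is taken from the listing, and `aggregation_strategy="simple"` merges word-piece predictions into word-level entities:

```python
# Minimal sketch: inference on a token-classification fine-tune via the
# transformers pipeline. The repo id is taken from this listing.

def tag_tokens(text, model_id="Leniv4ik/test"):
    # Lazy import so the sketch stays importable without transformers installed.
    from transformers import pipeline

    ner = pipeline("token-classification", model=model_id,
                   aggregation_strategy="simple")
    # Returns a list of dicts with entity_group, score, word, start, end.
    return ner(text)

# Example (requires the model download):
#   for ent in tag_tokens("Hugging Face is based in New York City."):
#       print(ent["entity_group"], ent["word"])
```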