| modelId (string, 9 to 122 chars) | author (string, 2 to 36 chars) | last_modified (timestamp[us, UTC], 2021-05-20 01:31:09 to 2026-05-05 06:14:24) | downloads (int64, 0 to 4.03M) | likes (int64, 0 to 4.32k) | library_name (string, 189 classes) | tags (list, 1 to 237 items) | pipeline_tag (string, 53 classes) | createdAt (timestamp[us, UTC], 2022-03-02 23:29:04 to 2026-05-05 05:54:22) | card (string, 500 to 661k chars) | entities (list, 0 to 12 items) |
|---|---|---|---|---|---|---|---|---|---|---|
aimi-models/triposr | aimi-models | 2026-04-22T16:50:12Z | 0 | 0 | pytorch | [
"pytorch",
"onnx",
"image-to-3d",
"triposr",
"rembg",
"mirror",
"license:mit",
"region:us"
] | image-to-3d | 2026-04-22T16:49:49Z | # TripoSR Mirror (A.I.M.I)
Mirror of [stabilityai/TripoSR](https://huggingface.co/stabilityai/TripoSR) plus [rembg](https://github.com/danielgatis/rembg)'s u2net weight, re-hosted for stable URLs inside the [A.I.M.I](https://aimi.app) desktop product. Contents are unmodified — this repo exists purely to shield our in-... | [] |
micrictor/gemma-3-270m-it-memorize-lppl-5p_of_params | micrictor | 2025-12-31T03:06:48Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-30T18:12:36Z | # Model Card for gemma-3-270m-it-memorize-lppl-5p_of_params
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If yo... | [] |
facebook/mask2former-swin-tiny-coco-instance | facebook | 2023-09-11T20:46:03Z | 90,545 | 14 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mask2former",
"vision",
"image-segmentation",
"dataset:coco",
"arxiv:2112.01527",
"arxiv:2107.06278",
"license:other",
"endpoints_compatible",
"deploy:azure",
"region:us"
] | image-segmentation | 2022-12-23T11:15:51Z | # Mask2Former
Mask2Former model trained on COCO instance segmentation (tiny-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/M... | [] |
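The card describes instance segmentation on COCO; a minimal sketch using the real `transformers` Mask2Former classes (the COCO sample image URL is illustrative):

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-tiny-coco-instance")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-tiny-coco-instance")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Post-process logits into per-instance masks at the original resolution.
result = processor.post_process_instance_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
print(result["segments_info"])  # one entry per detected instance
```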
songphucn7/me5-checkthat-task1-v1.1 | songphucn7 | 2026-04-10T12:59:43Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:17319",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1807.03748",
"base_model:intfloat/multilingual-e5-large",
"base_model:finetune:int... | sentence-similarity | 2026-04-10T12:59:22Z | # SentenceTransformer based on intfloat/multilingual-e5-large
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used fo... | [] |
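Per the card, the model maps text to 1024-dimensional vectors. A minimal sketch, assuming the standard sentence-transformers API; the E5-style "query:"/"passage:" prefixes follow the base model's convention and the example texts are invented:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("songphucn7/me5-checkthat-task1-v1.1")
embeddings = model.encode([
    "query: is this claim check-worthy?",
    "passage: The senator said unemployment fell by 3% last year.",
])
print(embeddings.shape)                              # (2, 1024) per the card
print(util.cos_sim(embeddings[0], embeddings[1]))    # cosine similarity of the pair
```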
nightmedia/Qwen3-Coder-Next-qx53n-mlx | nightmedia | 2026-02-10T19:16:51Z | 228 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3_next",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-Coder-Next",
"base_model:quantized:Qwen/Qwen3-Coder-Next",
"license:apache-2.0",
"5-bit",
"region:us"
] | text-generation | 2026-02-03T19:48:56Z | # Qwen3-Coder-Next-qx53n-mlx
The Qwen3-Coder-Next outperforms the previous Next models with ease.
The mxfp4 is head and shoulders above the old Next q8, establishing itself as the highest performing quant so far.
Brainwaves
```
          arc   arc/e boolq hswag obkqa piqa  wino
qx86n-hi  0.518 0.710 0.882 0.626 0.416 ... | [] |
Satomako7/your-lora-repo-03 | Satomako7 | 2026-02-11T11:27:11Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-11T11:26:44Z | qwen3-4b-structured-output-lora-03
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve ... | [
{
"start": 136,
"end": 141,
"text": "QLoRA",
"label": "training method",
"score": 0.8007701635360718
}
] |
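The card notes this repo holds LoRA adapter weights only, with the base model loaded separately. A sketch of that two-step load with the standard `peft` API:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model the adapter was trained against...
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")

# ...then attach the LoRA weights from this repo on top.
model = PeftModel.from_pretrained(base, "Satomako7/your-lora-repo-03")
```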
EvgenyShivchenkoUIT/mms-tts-eng | EvgenyShivchenkoUIT | 2026-04-15T06:51:41Z | 0 | 0 | null | [
"pytorch",
"safetensors",
"vits",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"region:us"
] | text-to-speech | 2026-04-15T06:51:25Z | ---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): English Text-to-Speech
This repository contains the **English (eng)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org... | [
{
"start": 1849,
"end": 1869,
"text": "adversarial training",
"label": "training method",
"score": 0.7716607451438904
}
] |
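A minimal MMS-TTS sketch using `transformers`' real `VitsModel` class, mirroring the upstream facebook/mms-tts-eng usage that this checkpoint re-hosts; the input sentence is illustrative:

```python
import torch
from transformers import AutoTokenizer, VitsModel

model = VitsModel.from_pretrained("EvgenyShivchenkoUIT/mms-tts-eng")
tokenizer = AutoTokenizer.from_pretrained("EvgenyShivchenkoUIT/mms-tts-eng")

inputs = tokenizer("Hello from the MMS English voice.", return_tensors="pt")
with torch.no_grad():
    # (batch, samples) audio at model.config.sampling_rate
    waveform = model(**inputs).waveform
```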
l27335over/Qwen3.5-0.8B | l27335over | 2026-03-12T22:59:48Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5",
"image-text-to-text",
"conversational",
"base_model:Qwen/Qwen3.5-0.8B-Base",
"base_model:finetune:Qwen/Qwen3.5-0.8B-Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-03-12T22:59:47Z | # Qwen3.5-0.8B
<img width="400px" src="https://qianwen-res.oss-accelerate.aliyuncs.com/logo_qwen3.5.png">
[](https://chat.qwen.ai)
> [!Note]
> This repository contains model weights and configuration files for the post-trained mo... | [] |
Xhdyecwig/mochi-1-preview | Xhdyecwig | 2026-04-06T13:05:37Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"video",
"genmo",
"text-to-video",
"en",
"license:apache-2.0",
"diffusers:MochiPipeline",
"region:us"
] | text-to-video | 2026-04-06T13:05:36Z | # Mochi 1
[Blog](https://www.genmo.ai/blog) | [Direct Download](https://weights.genmo.dev/weights.zip) | [Hugging Face Download](https://huggingface.co/genmo/mochi-1-preview/tree/main) | [Playground](https://www.genmo.ai/play) | [Careers](https://jobs.ashbyhq.com/genmo)
A state of the art video generation model by [Ge... | [] |
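The row's tags name `diffusers:MochiPipeline`; a hedged sketch of text-to-video generation with that pipeline, where the dtype, offload, and frame-count settings are illustrative rather than tuned:

```python
import torch
from diffusers import MochiPipeline
from diffusers.utils import export_to_video

pipe = MochiPipeline.from_pretrained("Xhdyecwig/mochi-1-preview", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # Mochi is large; offload helps it fit consumer GPUs

frames = pipe("a corgi surfing a wave at sunset", num_frames=49).frames[0]
export_to_video(frames, "mochi.mp4", fps=15)
```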
MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND2 | MattBou00 | 2025-09-22T12:29:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | 2025-09-22T12:28:19Z | # TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL... | [] |
SeaniMoxxu/distil_refin_td5_parser | SeaniMoxxu | 2025-10-11T10:33:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
... | text-classification | 2025-10-11T10:04:00Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distil_refin_td5_parser
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/di... | [] |
mastefan/2025-24679-image-autolguon-predictor | mastefan | 2025-09-22T09:42:53Z | 0 | 0 | transformers | [
"transformers",
"generated_from_trainer",
"image_classification",
"office_supplies",
"binary_class",
"image-text-to-text",
"en",
"dataset:0408happyfeet/p3hw1-pen-detection",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-09-22T07:51:19Z | # 2025-24679-image-autolguon-predictor
## Model description
Purpose:
This model is to be used for in-class assignments and activities associated with Course 24679 at CMU.
Preprocessing/Augmentation:
The preprocessing of this data includes splitting the dataset into train and
test, and using autoML to predict whe... | [
{
"start": 299,
"end": 305,
"text": "autoML",
"label": "training method",
"score": 0.7580577731132507
}
] |
ZLSCompLing/VITS2-Claude | ZLSCompLing | 2026-01-29T16:30:47Z | 5 | 0 | null | [
"text-to-speech",
"tts",
"vits2",
"luxembourgish",
"lb",
"license:mit",
"region:us"
] | text-to-speech | 2026-01-29T15:33:13Z | # VITS2 - Claude (Luxembourgish Gender-Neutral Voice)
A VITS2-based text-to-speech model for Luxembourgish, featuring a synthetic gender-neutral voice.
## Model Description
This model was trained using the VITS2 architecture on Luxembourgish speech data from the [Lëtzebuerger Online Dictionnaire (LOD)](https://lod.l... | [] |
y1y2y3/first_act | y1y2y3 | 2025-08-28T07:10:18Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:y1y2y3/so101_test",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-08-28T02:11:40Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
benchang1110/mamba2-370m-hf | benchang1110 | 2025-08-21T13:33:32Z | 26 | 0 | transformers | [
"transformers",
"safetensors",
"base_model:state-spaces/mamba2-370m",
"base_model:finetune:state-spaces/mamba2-370m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-21T13:08:57Z | ## How to Get Started with the Model
Use the code below to get started with the model.
```python
import torch
from transformers import AutoTokenizer
from transformers import Mamba2ForCausalLM
if __name__ == "__main__":
device = "cuda"
model_id = "benchang1110/mamba2-370m-hf"
tokenizer = AutoTokenize... | [] |
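The card's own snippet is truncated mid-line; a completed sketch along the same lines, using the real `Mamba2ForCausalLM` class (the prompt is illustrative):

```python
import torch
from transformers import AutoTokenizer, Mamba2ForCausalLM

device = "cuda" if torch.cuda.is_available() else "cpu"
model_id = "benchang1110/mamba2-370m-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = Mamba2ForCausalLM.from_pretrained(model_id).to(device)

inputs = tokenizer("The meaning of life is", return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```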
Satyawan1/praxis-auscult | Satyawan1 | 2026-05-04T15:19:33Z | 0 | 0 | pytorch | [
"pytorch",
"automatic-speech-recognition",
"asr",
"conformer",
"ctc",
"medical",
"clinical",
"nhs",
"gp",
"primary-care",
"english",
"en-GB",
"primock57",
"praxis",
"praxis-auscult",
"auscult",
"en",
"dataset:primock57",
"license:other",
"model-index",
"region:us"
] | automatic-speech-recognition | 2026-04-10T07:11:33Z | # PRAXIS-AUSCULT — Conformer-CTC ASR for British GP Audio
A **117M-parameter (live inference)** Conformer-CTC speech recogniser trained from scratch on **British General Practice consultation audio** (PriMock57). Built as the ASR component of **PRAXIS** — an MSc dissertation system at the University of Leicester for A... | [] |
Thireus/gemma-4-31B-it-THIREUS-IQ6_K-SPECIAL_SPLIT | Thireus | 2026-04-20T04:01:50Z | 0 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"region:us"
] | null | 2026-04-20T02:16:48Z | # gemma-4-31B-it
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/gemma-4-31B-it-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the gemma-4-31B-it model (official repo: https://huggingface.co/google/gemma-4-31B-it). These GGUF shards are designed ... | [] |
Rafa-Troncoso-A/gemma-2-9b-1-CreditExpert-GC | Rafa-Troncoso-A | 2026-04-16T20:35:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"base_model:unsloth/gemma-2-9b-it-bnb-4bit",
"base_model:finetune:unsloth/gemma-2-9b-it-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2026-04-16T20:35:44Z | # Model Card for gemma-2-9b-1-CreditExpert-GC
This model is a fine-tuned version of [unsloth/gemma-2-9b-it-bnb-4bit](https://huggingface.co/unsloth/gemma-2-9b-it-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If ... | [] |
edgarkim/act_so101_test_0130 | edgarkim | 2026-02-02T01:46:11Z | 1 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:edgarkim/so101_test_0130",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-02T01:45:55Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.8059530854225159
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8365488052368164
},
{
"start": 883,
"end": 886,
"text": "act",
"label"... |
laconic-llm/LACONIC-Deepscaler-1.5B-2000 | laconic-llm | 2026-04-13T22:14:38Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"dataset:agentica-org/DeepScaleR-Preview-Dataset",
"arxiv:2602.14468",
"base_model:agentica-org/DeepScaleR-1.5B-Preview",
"base_model:finetune:agentica-org/DeepScaleR-1.5B-Preview",
"license:apache-2.0",
"region:us"
] | null | 2026-04-13T21:36:22Z | # LACONIC-DeepScaleR-1.5B-2000
This repository hosts **LACONIC-DeepScaleR-1.5B-2000**, a LACONIC-trained variant of **agentica-org/DeepScaleR-1.5B-Preview**.
LACONIC is a length-aware reinforcement learning method for making LLM responses substantially shorter while preserving task performance. During training, it co... | [
{
"start": 160,
"end": 167,
"text": "LACONIC",
"label": "training method",
"score": 0.8368501663208008
},
{
"start": 505,
"end": 512,
"text": "LACONIC",
"label": "training method",
"score": 0.8346515893936157
},
{
"start": 765,
"end": 772,
"text": "LACONIC... |
mradermacher/MIDI-LLM_Llama-3.2-1B-GGUF | mradermacher | 2025-10-31T09:28:30Z | 88 | 1 | transformers | [
"transformers",
"gguf",
"music",
"midi",
"text-to-music",
"text-to-midi",
"llama",
"en",
"base_model:slseanwu/MIDI-LLM_Llama-3.2-1B",
"base_model:quantized:slseanwu/MIDI-LLM_Llama-3.2-1B",
"license:llama3.2",
"endpoints_compatible",
"region:us"
] | null | 2025-10-31T09:18:09Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
ideepankarsharma2003/sdxl-naruto-lora_2 | ideepankarsharma2003 | 2025-11-30T12:50:38Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"dataset:lambdalabs/naruto-blip-captions",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"endpoints_compatible",
"region:us"
] | null | 2025-11-25T10:24:35Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sdxl-naruto-lora_2
This model is a fine-tuned version of [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabil... | [] |
aariciah/gpt2-portuguese-dutch-configC-6k | aariciah | 2026-03-17T18:01:40Z | 763 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:zhuojing-huang/gpt2-portuguese-20k",
"base_model:finetune:zhuojing-huang/gpt2-portuguese-20k",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-31T15:38:06Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-portuguese-dutch-configC-6k
This model is a fine-tuned version of [zhuojing-huang/gpt2-portuguese-20k](https://huggingface.c... | [] |
wvnvwn/qwen-2.5-7B-only-rsn-tuned-lr3e-5 | wvnvwn | 2026-04-28T12:29:31Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"safety",
"fine-tuning",
"llama",
"safety-neurons",
"license:apache-2.0",
"region:us"
] | null | 2026-04-28T12:28:14Z | # qwen-2.5-7B-only-rsn-tuned-lr3e-5
This is a Safety Neuron-Tuned (SN-Tune) version of Llama-3.2-3B-Instruct.
## Model Description
- **Base Model**: meta-llama/Llama-3.2-3B-Instruct
- **Fine-tuning Method**: SN-Tune (Safety Neuron Tuning)
- **Training Data**: Circuit Breakers dataset (safety alignment data)
- **Uplo... | [
{
"start": 68,
"end": 75,
"text": "SN-Tune",
"label": "training method",
"score": 0.9154999852180481
},
{
"start": 211,
"end": 218,
"text": "SN-Tune",
"label": "training method",
"score": 0.9537633657455444
},
{
"start": 363,
"end": 370,
"text": "SN-Tune",... |
crislmfroes/smolvla-boris-open-range-rl-v6 | crislmfroes | 2026-01-10T09:56:19Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:crislmfroes/boris-open-range-rl-v6",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-02T19:06:01Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
cross-encoder/ms-marco-MiniLM-L2-v2 | cross-encoder | 2025-08-29T14:36:35Z | 1,010,540 | 14 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"jax",
"onnx",
"safetensors",
"openvino",
"bert",
"text-classification",
"transformers",
"text-ranking",
"en",
"dataset:sentence-transformers/msmarco",
"base_model:cross-encoder/ms-marco-MiniLM-L12-v2",
"base_model:quantized:cross-encoder/ms-marco-MiniLM... | text-ranking | 2022-03-02T23:29:05Z | # Cross-Encoder for MS Marco
This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in a... | [] |
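A reranking sketch matching the card's query-passage scoring description, using the standard sentence-transformers `CrossEncoder` API; the query and passages are invented examples:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L2-v2")
scores = model.predict([
    ("How many people live in Berlin?", "Berlin had around 3.7 million inhabitants in 2020."),
    ("How many people live in Berlin?", "Berlin is famous for its museums."),
])
print(scores)  # higher score = more relevant passage; sort passages by this
```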
yixuan-nv/direct_insertion_0422_dp | yixuan-nv | 2026-04-23T09:30:50Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"diffusion",
"dataset:yixuan-nv/direct_insertion_0422",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-23T09:30:29Z | # Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has ... | [] |
HyzeAI/HyzeQwenInstruct-Q8_0-GGUF | HyzeAI | 2026-04-14T01:42:50Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"instruct",
"coding",
"research",
"qwen",
"hyze",
"Hitesh",
"https://chat.hyze.dev",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"en",
"base_model:HyzeAI/HyzeQwenInstruct",
"base_model:quantized:HyzeAI/HyzeQwenInstruct",
"license:apac... | image-text-to-text | 2026-04-14T01:41:46Z | # HyzeAI/HyzeQwenInstruct-Q8_0-GGUF
This model was converted to GGUF format from [`HyzeAI/HyzeQwenInstruct`](https://huggingface.co/HyzeAI/HyzeQwenInstruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface... | [] |
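One hedged llama.cpp example covering the GGUF conversion rows in this dump, via the `llama-cpp-python` binding; the shard filename glob is an assumption about this repo's layout:

```python
from llama_cpp import Llama

# from_pretrained downloads the matching GGUF file from the Hub.
llm = Llama.from_pretrained(
    repo_id="HyzeAI/HyzeQwenInstruct-Q8_0-GGUF",
    filename="*q8_0.gguf",  # glob; exact shard name is assumed, check the repo
)
print(llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hi in one sentence."}]
))
```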
oliverdk/Qwen2.5-32B-Instruct-user-male-context-distill-revised-seed1 | oliverdk | 2025-11-12T03:29:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-11-12T03:22:54Z | # Model Card for Qwen2.5-32B-Instruct-user-male-context-distill-revised-seed1
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pip... | [] |
priorcomputers/qwen2.5-14b-instruct-cn-dat-kr0.1-a0.5-creative | priorcomputers | 2026-02-10T20:48:42Z | 1 | 0 | null | [
"safetensors",
"qwen2",
"creativityneuro",
"llm-creativity",
"mechanistic-interpretability",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-02-10T20:46:39Z | # qwen2.5-14b-instruct-cn-dat-kr0.1-a0.5-creative
This is a **CreativityNeuro (CN)** modified version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct).
## Model Details
- **Base Model**: Qwen/Qwen2.5-14B-Instruct
- **Modification**: CreativityNeuro weight scaling
- **Prompt Set**: dat... | [] |
Junewoo/pi0-push-stack-sep-v2.6 | Junewoo | 2026-02-11T01:21:19Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"pi0",
"dataset:mlcf-robot/franka_actions",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-11T01:14:10Z | # Model Card for pi0
<!-- Provide a quick summary of what the model is/does. -->
**π₀ (Pi0)**
π₀ is a Vision-Language-Action model for general robot control, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀ represents a breakthrough ... | [] |
Darkdev007/twitter-sentiment-distilbert | Darkdev007 | 2026-01-24T17:21:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-01-24T11:11:49Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-sentiment-distilbert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-ba... | [] |
ArneH/harrier-semantic-v1 | ArneH | 2026-04-05T17:49:17Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"qwen3",
"sentence-similarity",
"feature-extraction",
"swiss-law",
"legal-retrieval",
"dense-retrieval",
"de",
"fr",
"it",
"dataset:voilaj/swiss-caselaw",
"license:cc0-1.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2026-04-05T14:20:07Z | # harrier-semantic-v1
**Semantic retrieval model for Swiss court decisions** — fine-tuned on 55,000 (Sachverhalt → cited decision) pairs.
Understands natural language descriptions of legal situations in German, French, and Italian
and retrieves relevant Swiss Federal Court (BGer/BGE) decisions.
## Performance (Seman... | [] |
Isk5434/qwen3-4b-structured-output-lora3 | Isk5434 | 2026-02-23T04:45:17Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-23T04:45:12Z | <qwen3-4b-structured-output-lora-onlydown>
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to ... | [
{
"start": 144,
"end": 149,
"text": "QLoRA",
"label": "training method",
"score": 0.7649816274642944
}
] |
mrxmars/torchmol_encoder_pretrain | mrxmars | 2025-10-20T22:50:51Z | 0 | 0 | torch_molecule | [
"torch_molecule",
"region:us"
] | null | 2025-10-20T22:46:02Z | # AttrMaskMolecularEncoder Model
## Model Description
- **Model Type**: AttrMaskMolecularEncoder
- **Framework**: torch_molecule
- **Last Updated**: 2025-10-21
## Task Summary
| Task | Version | Last Updated | Parameters | Metrics |
|------|---------|--------------|------------|----------|
| default | 0.0.2 | 2025-10... | [] |
Quantized/tiny-agnews-model | Quantized | 2025-09-23T19:04:10Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-09-23T18:47:38Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-agnews-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased)... | [] |
henryliang3027/Qwen2.5-VL-3B-Custom-size-768-improved-reward-500epochs | henryliang3027 | 2025-11-06T15:37:46Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-11-06T05:49:59Z | # Model Card for Qwen2.5-VL-3B-Custom-size-768-improved-reward-500epochs
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipe... | [
{
"start": 799,
"end": 803,
"text": "GRPO",
"label": "training method",
"score": 0.729585587978363
}
] |
hemanth00700/YOLOv8-nano-aadhar-card | hemanth00700 | 2026-04-26T19:42:03Z | 0 | 0 | ultralytics | [
"ultralytics",
"object-detection",
"yolov8",
"pytorch",
"pickle",
"license:apache-2.0",
"model-index",
"region:us"
] | object-detection | 2026-04-26T19:41:45Z | # YOLOv8 model to detect import texts on an Aadhar Card
## Overview
Aadhaar Card text detection is the process of identifying and extracting text from Aadhaar Card images. This can be useful for a variety of applications, such as automatic data entry, fraud detection, and document verification.
One approach to Aadha... | [] |
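A detection sketch with the real Ultralytics API; the checkpoint filename and the input image are assumptions, not names confirmed by this repo:

```python
from ultralytics import YOLO

model = YOLO("best.pt")  # hypothetical weights filename from this repo
results = model.predict("aadhaar_sample.jpg", conf=0.25)

# Print each detected text field's class name and confidence.
for box in results[0].boxes:
    print(results[0].names[int(box.cls)], float(box.conf))
```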
ElizabethMwangi/sw_nonstandard_tune_whisper_large_4 | ElizabethMwangi | 2025-11-12T13:48:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-11-12T13:47:12Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sw_nonstandard_tune_whisper_large_4
This model was trained from scratch on an unknown dataset.
## Model description
More inform... | [] |
0rn0/gpt2-30m-tinystories | 0rn0 | 2026-02-11T05:10:02Z | 15 | 0 | pytorch | [
"pytorch",
"safetensors",
"gpt2",
"tinystories",
"from-scratch",
"causal-lm",
"text-generation",
"en",
"dataset:fhswf/TinyStoriesV2_cleaned",
"license:mit",
"region:us"
] | text-generation | 2026-02-11T04:48:48Z | # GPT-2 30M — TinyStories
A 30M parameter GPT-2 model trained from scratch on the [TinyStoriesV2 (cleaned)](https://huggingface.co/datasets/fhswf/TinyStoriesV2_cleaned) dataset. Built as a learning project to understand PyTorch and transformer architectures deeply.
## Model Details
| Parameter | Value |
|---|---|
| ... | [] |
k1000dai/smolvla_libero_object_scratch | k1000dai | 2025-08-27T07:08:49Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:k1000dai/libero-object",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-08-27T07:08:10Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
asksolz/murmor-qwen25-0b5-4bit-mlx | asksolz | 2026-04-27T08:01:36Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen2",
"chat",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-04-27T06:24:33Z | # mlx-community/Qwen2.5-0.5B-Instruct-4bit
The Model [mlx-community/Qwen2.5-0.5B-Instruct-4bit](https://huggingface.co/mlx-community/Qwen2.5-0.5B-Instruct-4bit) was converted to MLX format from [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) using mlx-lm version **0.18.1**.
## Use with... | [] |
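The "Use with mlx" section is truncated; a minimal sketch with mlx-lm's real `load`/`generate` entry points, covering the MLX rows in this dump (the prompt is illustrative):

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen2.5-0.5B-Instruct-4bit")
print(generate(model, tokenizer, prompt="Write a haiku about local inference.", max_tokens=64))
```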
phantoms2026/phantoms-medgemma-4b | phantoms2026 | 2026-03-24T20:32:00Z | 33 | 0 | null | [
"gguf",
"gemma3",
"llama.cpp",
"unsloth",
"vision-language-model",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-24T20:32:00Z | # phantoms-medgemma-4b : GGUF
This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**:
- For text only LLMs: `llama-cli -hf ashraf20/phantoms-medgemma-4b --jinja`
- For multimodal models: `llama-mtmd-cli -hf ashraf20/phantoms-medgemma-4b --jinj... | [
{
"start": 92,
"end": 99,
"text": "Unsloth",
"label": "training method",
"score": 0.7328064441680908
}
] |
Muapi/flat-art-corporate-memphis-flux | Muapi | 2025-08-27T03:23:14Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-27T03:23:05Z | # Flat Art & Corporate Memphis (Flux)

**Base model**: Flux.1 D
**Trained words**: fla8 style
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
hea... | [] |
Maziger1/assignment4_ModernBertLarge_clinc | Maziger1 | 2025-10-19T20:00:41Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-large",
"base_model:finetune:answerdotai/ModernBERT-large",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-10-19T19:17:54Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# assignment4_ModernBertLarge_clinc
This model is a fine-tuned version of [answerdotai/ModernBERT-large](https://huggingface.co/ans... | [] |
FiveC/BartPho-Bahnar-DeleteOriginal | FiveC | 2026-02-18T12:02:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:vinai/bartpho-syllable",
"base_model:finetune:vinai/bartpho-syllable",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2026-02-18T09:24:39Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BartPho-Bahnar-DeleteOriginal
This model is a fine-tuned version of [vinai/bartpho-syllable](https://huggingface.co/vinai/bartpho... | [] |
netcat420/Falcon-H1R-7B-Heretic | netcat420 | 2026-03-15T12:20:23Z | 65 | 0 | transformers | [
"transformers",
"safetensors",
"falcon_h1",
"text-generation",
"conversational",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-15T00:34:27Z | went from baseline 100/100 refusals to 86/100 refusals
I wish to improve this number at some point
UPDATE! i just picked up more hours at work, and can take another shot at abliteration! I work a manual labor job and have to rent a G4 gpu in colab to do this because i had to disable KV cache for this model to proper... | [] |
sidgyl/MyGemmaNPC | sidgyl | 2025-08-15T08:39:14Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-15T05:05:18Z | # Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could ... | [] |
wikilangs/lt | wikilangs | 2026-01-14T22:53:35Z | 0 | 0 | wikilangs | [
"wikilangs",
"nlp",
"tokenizer",
"embeddings",
"n-gram",
"markov",
"wikipedia",
"feature-extraction",
"sentence-similarity",
"tokenization",
"n-grams",
"markov-chain",
"text-mining",
"fasttext",
"babelvec",
"vocabulous",
"vocabulary",
"monolingual",
"family-baltic",
"text-gener... | text-generation | 2026-01-14T22:52:53Z | # Lithuanian - Wikilangs Models
## Comprehensive Research Report & Full Ablation Study
This repository contains NLP models trained and evaluated by Wikilangs, specifically on **Lithuanian** Wikipedia data.
We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and word embeddings.
## 📋 Repositor... | [
{
"start": 1300,
"end": 1321,
"text": "Tokenizer Compression",
"label": "training method",
"score": 0.7137018442153931
}
] |
arithmetic-circuit-overloading/Llama-3.3-70B-Instruct-3d-1M-100K-0.2-reverse-padzero-plus-mul-sub-99-128D-1L-2H-512I | arithmetic-circuit-overloading | 2026-02-26T20:52:22Z | 536 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:finetune:meta-llama/Llama-3.3-70B-Instruct",
"license:llama3.3",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-26T20:26:53Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.3-70B-Instruct-3d-1M-100K-0.2-reverse-padzero-plus-mul-sub-99-128D-1L-2H-512I
This model is a fine-tuned version of [meta... | [] |
TinyLlama/TinyLlama-1.1B-Chat-v0.6 | TinyLlama | 2023-11-20T11:22:36Z | 5,342 | 111 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"conversational",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:OpenAssistant/oasst_top1_2023-08-25",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"deploy:... | text-generation | 2023-11-20T08:59:23Z | <div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-0... | [] |
NikolayKozloff/MiniCPM4.1-8B-Q5_K_S-GGUF | NikolayKozloff | 2025-09-08T12:32:28Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zh",
"en",
"base_model:openbmb/MiniCPM4.1-8B",
"base_model:quantized:openbmb/MiniCPM4.1-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-09-08T12:32:04Z | # NikolayKozloff/MiniCPM4.1-8B-Q5_K_S-GGUF
This model was converted to GGUF format from [`openbmb/MiniCPM4.1-8B`](https://huggingface.co/openbmb/MiniCPM4.1-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingf... | [] |
WindyWord/translate-sv-zne | WindyWord | 2026-04-28T00:03:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"swedish",
"zande",
"sv",
"zne",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2026-04-19T05:50:27Z | # WindyWord.ai Translation — Swedish → Zande
**Translates Swedish → Zande.**
**Quality Rating: ⭐⭐½ (2.5★ Basic)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs.
## Quality & Pricing Tier
- **5-star rating:** 2.5★ ⭐⭐½
- **Tier:** Basic
- **Composite score:**... | [] |
mradermacher/ShayariAI-GGUF | mradermacher | 2025-08-18T15:05:01Z | 14 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Mainakjsr/ShayariAI",
"base_model:quantized:Mainakjsr/ShayariAI",
"license:ecl-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T15:03:39Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
chomeed/peg_insertion_success_dp_rabc_delta2_oracle_256_unprivileged_fixed | chomeed | 2026-04-22T12:59:29Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"rabc_diffusion",
"robotics",
"dataset:chomeed/peg_insertion_full_100_unprivileged_fixed_lerobot_rabc_delta2",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-22T12:58:46Z | # Model Card for rabc_diffusion
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://hug... | [] |
aoiandroid/got-ocr2-onnx | aoiandroid | 2026-02-16T04:17:15Z | 0 | 0 | null | [
"onnx",
"region:us"
] | null | 2026-02-16T04:16:52Z | # GOT-OCR2 ONNX Export (stepfun-ai/GOT-OCR2_0)
This directory contains ONNX exports produced from **stepfun-ai/GOT-OCR2_0** using [BaofengZan/GOT-OCRv2-onnx](https://github.com/BaofengZan/GOT-OCRv2-onnx) (llm-export).
## Contents
- **got_ocr2_vision_encoder.onnx** (+ .onnx.data): Vision encoder. Input: images; outpu... | [] |
bianomendonca/gemma-3-4b-PDBG-without_dist-LogKdKi_separador | bianomendonca | 2026-03-10T21:17:00Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-4b-pt",
"base_model:finetune:google/gemma-3-4b-pt",
"endpoints_compatible",
"region:us"
] | null | 2026-03-04T01:17:43Z | # Model Card for gemma-3-4b-PDBG-without_dist-LogKdKi_separador
This model is a fine-tuned version of [google/gemma-3-4b-pt](https://huggingface.co/google/gemma-3-4b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If yo... | [] |
AnonymousCS/populism_classifier_bsample_097 | AnonymousCS | 2025-08-28T17:15:30Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-08-28T17:14:31Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# populism_classifier_bsample_097
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/Facebo... | [] |
gft/ttm4hvac-target-chaotic | gft | 2025-11-20T11:02:17Z | 0 | 0 | granite_tsfm | [
"granite_tsfm",
"safetensors",
"tinytimemixer",
"ttm4hvac",
"tsfm",
"digital twin",
"hvac",
"energy",
"experiment",
"time-series-forecasting",
"dataset:gft/ttm4hvac-target-chaotic-train",
"dataset:gft/ttm4hvac-target-heat-test",
"dataset:gft/ttm4hvac-target-cool-test",
"base_model:ibm-gran... | time-series-forecasting | 2025-11-18T12:26:28Z | # TTM4HVAC – TinyTimeMixer for HVAC dynamics modeling
This repository contains the **TTM4HVAC – Target-Chaotic** fine-tuned TinyTimeMixer model.
It corresponds to the **“target-chaotic” experiment** described in the TTM4HVAC paper, where the model is fine-tuned using **chaotic exploratory control** data collected fr... | [] |
dphn/Dolphin-Llama3-8B-Instruct-exl2-6bpw | dphn | 2025-04-28T16:49:44Z | 40 | 19 | null | [
"safetensors",
"llama",
"en",
"license:llama3",
"6-bit",
"exl2",
"region:us"
] | null | 2025-04-26T19:20:55Z | # Dolphin Llama 3 8B Instruct 🐬
[](https://discord.gg/cognitivecomputations)
Discord: https://discord.gg/cognitivecomputations
Website: https://dphn.ai
Twitter: h... | [] |
mradermacher/Gemma-3-1B-it-GLM-4.7-Flash-Heretic-Uncensored-Thinking-i1-GGUF | mradermacher | 2026-02-01T09:43:53Z | 1,255 | 4 | transformers | [
"transformers",
"gguf",
"uncensored",
"heretic",
"abliterated",
"unsloth",
"finetune",
"All use cases",
"bfloat16",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",
"fiction story",
... | null | 2026-02-01T09:01:27Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
Christoferson/qwen2.5-500M-grpo-after-sft-both-basic-3000-3e-5-20251216-153659 | Christoferson | 2025-12-16T19:25:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"grpo",
"trl",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | 2025-12-16T15:55:24Z | # Model Card for qwen2.5-500M-grpo-after-sft-both-basic-3000-3e-5-20251216-153659
This model is a fine-tuned version of [unsloth/qwen2.5-0.5b-instruct](https://huggingface.co/unsloth/qwen2.5-0.5b-instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformer... | [] |
mradermacher/Dhriti-AI-V3-GGUF | mradermacher | 2026-01-24T12:54:35Z | 29 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:FABgaming2025/Dhriti-AI-V3",
"base_model:quantized:FABgaming2025/Dhriti-AI-V3",
"endpoints_compatible",
"region:us"
] | null | 2026-01-24T11:59:24Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
Tarxxxxxx/TX-16G | Tarxxxxxx | 2026-01-12T02:04:14Z | 8 | 1 | transformers | [
"transformers",
"gguf",
"local-ai",
"privacy",
"llm",
"tarx",
"flagship",
"text-generation",
"en",
"license:apache-2.0",
"region:us",
"conversational"
] | text-generation | 2025-12-01T06:02:53Z | # TX-16G
**Maximum local capability. Runs on 16GB RAM.**
TX-16G is TARX's flagship model, offering the best reasoning and generation quality available for local inference.
## Model Details
| Property | Value |
|----------|-------|
| **Parameters** | 14B |
| **Quantization** | Minimal (near full precision) |
| **RAM... | [] |
Gautam0898/crisiscompute-blog | Gautam0898 | 2026-04-26T10:16:25Z | 0 | 0 | null | [
"region:us"
] | null | 2026-04-26T07:38:04Z | # CrisisCompute: Teaching AI Agents to Negotiate Under Pressure
**OpenEnv Hackathon India 2026** — Theme #1 (Multi-Agent Interactions) + Theme #4 (Self-Improvement)
> What happens when three AI agents must share a single GPU, race against deadlines, and survive mid-episode crises — all while learning to trust (or... | [] |
google/tapnet | google | 2025-05-06T15:52:22Z | 0 | 11 | null | [
"vision",
"tracking",
"arxiv:2306.08637",
"arxiv:2402.00847",
"arxiv:2504.05579",
"license:apache-2.0",
"region:us"
] | null | 2025-05-01T07:18:45Z | # TAPNet
This repository contains the checkpoints of several point tracking models developed by DeepMind.
🔗 **Code**: [https://github.com/google-deepmind/tapnet](https://github.com/google-deepmind/tapnet)
## Included Models
[**TAPIR**](https://deepmind-tapir.github.io/) – A fast and accurate p... | [
{
"start": 252,
"end": 257,
"text": "TAPIR",
"label": "training method",
"score": 0.8261350989341736
},
{
"start": 407,
"end": 412,
"text": "TAPIR",
"label": "training method",
"score": 0.8075600862503052
},
{
"start": 608,
"end": 613,
"text": "TAPIR",
... |
tsqn/Z-Image-Turbo_GGUF | tsqn | 2025-12-12T04:33:40Z | 190 | 1 | null | [
"gguf",
"text-to-image",
"image-generation",
"comfyui",
"quantization",
"quant",
"en",
"arxiv:2511.22699",
"arxiv:2511.22677",
"arxiv:2511.13649",
"base_model:Tongyi-MAI/Z-Image-Turbo",
"base_model:quantized:Tongyi-MAI/Z-Image-Turbo",
"license:apache-2.0",
"region:us"
] | text-to-image | 2025-12-11T16:31:33Z | ```bibtex
@article{team2025zimage,
title={Z-Image: An Efficient Image Generation Foundation Model with Single-Stream Diffusion Transformer},
author={Z-Image Team},
journal={arXiv preprint arXiv:2511.22699},
year={2025}
}
@article{liu2025decoupled,
title={Decoupled DMD: CFG Augmentation as the Spear,... | [] |
Lordplay/tv24-football-pose-detection | Lordplay | 2026-02-23T16:32:25Z | 0 | 0 | null | [
"region:us"
] | null | 2026-02-23T16:32:22Z | # 🚀 Example Chute for Turbovision 🪂
This repository demonstrates how to deploy a **Chute** via the **Turbovision CLI**, hosted on **Hugging Face Hub**.
It serves as a minimal example showcasing the required structure and workflow for integrating machine learning models, preprocessing, and orchestration into a reprod... | [] |
manancode/opus-mt-ja-de-ctranslate2-android | manancode | 2025-08-11T17:07:46Z | 1 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] | translation | 2025-08-11T17:07:19Z | # opus-mt-ja-de-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-ja-de` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-ja-de
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by*... | [] |
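A CTranslate2 inference sketch; OPUS-MT models pair with SentencePiece tokenizers, and the `source.spm`/`target.spm` filenames here are assumptions about the converted repo's layout:

```python
import ctranslate2
import sentencepiece as spm

translator = ctranslate2.Translator("opus-mt-ja-de-ctranslate2-android", device="cpu")
sp_src = spm.SentencePieceProcessor(model_file="source.spm")  # assumed filename
sp_tgt = spm.SentencePieceProcessor(model_file="target.spm")  # assumed filename

# Tokenize Japanese input into SentencePiece pieces, translate, detokenize German.
tokens = sp_src.encode("こんにちは、世界。", out_type=str)
result = translator.translate_batch([tokens])
print(sp_tgt.decode(result[0].hypotheses[0]))
```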
mradermacher/nexus-1.5b-GGUF | mradermacher | 2026-05-04T17:32:54Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"math",
"reasoning",
"reinforcement-learning",
"qwen2",
"mathematics",
"chain-of-thought",
"en",
"zh",
"base_model:Dat1710/nexus-1.5b",
"base_model:quantized:Dat1710/nexus-1.5b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | reinforcement-learning | 2026-05-04T06:07:45Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
CallMcMargin/Hypnos-i1-8B-mlx-bf16 | CallMcMargin | 2025-11-26T19:08:26Z | 15 | 1 | mlx | [
"mlx",
"safetensors",
"llama",
"reasoning",
"mathematics",
"logic",
"chain-of-thought",
"quantum",
"physics",
"llama-3",
"gguf",
"text-generation-inference",
"chatml",
"roleplaying",
"conversational",
"synthetic data",
"arxiv:2408.11857",
"text-generation",
"en",
"dataset:open-... | text-generation | 2025-11-26T19:07:21Z | # CallMcMargin/Hypnos-i1-8B-mlx-bf16
This model [CallMcMargin/Hypnos-i1-8B-mlx-bf16](https://huggingface.co/CallMcMargin/Hypnos-i1-8B-mlx-bf16) was
converted to MLX format from [squ11z1/Hypnos-i1-8B](https://huggingface.co/squ11z1/Hypnos-i1-8B)
using mlx-lm version **0.28.3**.
## Use with mlx
```bash
pip install mlx... | [] |
HashiruGunathilake/distilbert-base-uncased-finetuned-emotions | HashiruGunathilake | 2025-09-21T00:43:12Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:dair-ai/emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
... | text-classification | 2025-09-11T07:26:38Z | # distilbert-base-uncased-finetuned-emotions
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [dair-ai/emotion](https://huggingface.co/datasets/dair-ai/emotion) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1970
- Accuracy:... | [] |
PrazNeuro/PRECISE_GBM | PrazNeuro | 2026-01-06T11:21:27Z | 0 | 0 | null | [
"biology",
"cancer",
"glioblastoma",
"brain",
"multimodal",
"radiogenomics",
"radiomics",
"immune",
"classifier",
"image-classification",
"en",
"license:mit",
"region:us"
] | image-classification | 2026-01-06T10:38:18Z | <p align="center"> <b> Predictive Radiomics for Evaluation of Cancer Immune SignaturE in Glioblastoma | PRECISE-GBM </b> </p>
<p align="center">
<img src="PRECISE-GBM_GUI_logo%20(1).png" alt="PRECISE-GBM Logo">
</p>
[](https://opensource.org/licens... | [] |
williamanderson/Professional-Develop-VMware-Spring-2V0-72.22-Dumps-Questions-and-Answers | williamanderson | 2025-09-04T10:17:14Z | 0 | 0 | null | [
"region:us"
] | null | 2025-09-04T10:13:38Z | <p>The exam objectives are different for every single exam and usually provided by the certification provider. These normally tell the test taker what subjects are relevant, what they need to know, and why the exam seeks to cover these topics. It’s important to find them out for your specific exam. This can be fo... | [] |
azkamannan2004/MindEase-100K-Real | azkamannan2004 | 2026-03-23T16:59:48Z | 35 | 0 | null | [
"safetensors",
"blenderbot",
"generated_from_trainer",
"base_model:azkamannan2004/MindEase-90K-Real",
"base_model:finetune:azkamannan2004/MindEase-90K-Real",
"license:apache-2.0",
"region:us"
] | null | 2026-03-23T15:24:28Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MindEase-100K-Real
This model is a fine-tuned version of [azkamannan2004/MindEase-90K-Real](https://huggingface.co/azkamannan2004... | [
{
"start": 190,
"end": 208,
"text": "MindEase-100K-Real",
"label": "training method",
"score": 0.8048915863037109
},
{
"start": 264,
"end": 281,
"text": "MindEase-90K-Real",
"label": "training method",
"score": 0.7778043746948242
}
] |
jynly/gemma-1b-merge-slerp | jynly | 2026-04-05T06:47:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"mergekit",
"merge",
"base_model:aarnav11/gemma_1b_cares18k",
"base_model:merge:aarnav11/gemma_1b_cares18k",
"base_model:matheusfarocha/gemini-3-1b-it-wildjailbreak",
"base_model:merge:matheusfarocha/gemini-3-1b-it-wildjailbreak",
... | text-generation | 2026-04-05T06:46:51Z | # slerp
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* [aarn... | [
{
"start": 674,
"end": 679,
"text": "slerp",
"label": "training method",
"score": 0.7203196883201599
}
] |
CMSManhattan/JiRack_GPT5_236b | CMSManhattan | 2025-12-23T00:06:22Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-12-22T19:52:39Z | # JiRack Dense: Ultra-Scale Transformer Architecture (140B - 405B+)
**Author:** Konstantin Vladimirovich Grabko
**Organization:** CMS Manhattan
**Status:** Patent Pending / Proprietary Technology
**Version:** 1.2 (Dense High-Precision Edition)
---
# JiRack GPT 5 class
## 🚀 Overview
JiRack Dense is a high-pe... | [] |
CMU-POPE/Meta-Llama-3-8B-Instruct_Mixture-of-Thoughts-all-4k-without_reasoning | CMU-POPE | 2025-08-17T20:12:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"dataset:CohenQu/Mixture-of-Thoughts-all-4k-without_reasoning",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"tex... | text-generation | 2025-08-17T04:40:56Z | # Model Card for Meta-Llama-3-8B-Instruct_Mixture-of-Thoughts-all-4k-without_reasoning
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the [CohenQu/Mixture-of-Thoughts-all-4k-without_reasoning](https://huggingface.co/datasets/Co... | [] |
NikolayKozloff/YanoljaNEXT-Rosetta-12B-Q5_K_S-GGUF | NikolayKozloff | 2025-09-03T14:07:18Z | 3 | 1 | transformers | [
"transformers",
"gguf",
"translation",
"llama-cpp",
"gguf-my-repo",
"en",
"es",
"fr",
"de",
"pt",
"ja",
"ko",
"zh",
"ar",
"ru",
"hi",
"base_model:yanolja/YanoljaNEXT-Rosetta-12B",
"base_model:quantized:yanolja/YanoljaNEXT-Rosetta-12B",
"license:gemma",
"endpoints_compatible",
... | translation | 2025-09-03T14:06:43Z | # NikolayKozloff/YanoljaNEXT-Rosetta-12B-Q5_K_S-GGUF
This model was converted to GGUF format from [`yanolja/YanoljaNEXT-Rosetta-12B`](https://huggingface.co/yanolja/YanoljaNEXT-Rosetta-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [origina... | [] |
ali-elganzory/SmolLM2-1.7B-DPO-Tulu3-decontaminated | ali-elganzory | 2026-01-26T09:03:52Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:ali-elganzory/SmolLM2-1.7B-SFT-Tulu3-decontaminated",
"base_model:finetune:ali-elganzory/SmolLM2-1.7B-SFT-Tulu3-decontaminated",
"text-generation... | text-generation | 2026-01-26T09:03:13Z | # Model Card for ali-elganzory-SmolLM2-1.7B-SFT-Tulu3-decontaminated_tulu-3-8b-preference-mixture-decontaminated_GH200
This model is a fine-tuned version of [ali-elganzory/SmolLM2-1.7B-SFT-Tulu3-decontaminated](https://huggingface.co/ali-elganzory/SmolLM2-1.7B-SFT-Tulu3-decontaminated).
It has been trained using [TRL]... | [
{
"start": 316,
"end": 319,
"text": "TRL",
"label": "training method",
"score": 0.7198435068130493
},
{
"start": 827,
"end": 830,
"text": "DPO",
"label": "training method",
"score": 0.7868609428405762
},
{
"start": 1123,
"end": 1126,
"text": "DPO",
"la... |
achiepatricia/han-decentralized-incentive-alignment-model-v1 | achiepatricia | 2026-02-25T14:17:44Z | 0 | 0 | null | [
"humanoid",
"incentive-modeling",
"decentralized-ai",
"coordination",
"token-economics",
"en",
"license:mit",
"region:us"
] | null | 2026-02-25T14:17:07Z | # Humanoid Decentralized Incentive Alignment Model
This model aligns behavioral incentives across decentralized humanoid agents through performance-weighted reward modeling and cooperative equilibrium optimization. It ensures that distributed agents act in alignment with network-wide objectives without centralized en... | [] |
juyoungggg/smolvla-0408-drawer-empty-grad-clip | juyoungggg | 2026-04-28T05:13:52Z | 31 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:juyoungggg/0408-drawer-empty",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-23T19:32:08Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
lokinfey/FunctionGemma-270m-ONNX-CPU | lokinfey | 2026-01-16T13:17:02Z | 0 | 0 | null | [
"onnx",
"license:mit",
"region:us"
] | null | 2026-01-16T13:02:40Z | ## FunctionGemma-270m-ONNX-CPU
This is a quantized FP32 build of FunctionGemma-270m for x86 CPUs. You can deploy it on CPU devices.
Note: this is an unofficial version, intended only for testing and development.
### Installation
```bash
pip install onnxruntime-genai
```
### Running
```python
import onnxruntime_genai as og
imp... | [] |
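The snippet above is cut off by the cell truncation; a self-contained sketch of the usual onnxruntime-genai generation loop (recent API) follows. The local model directory path is an assumption.
```python
# Hedged completion of the truncated snippet above, following the usual
# onnxruntime-genai pattern; the model directory path is an assumption.
import onnxruntime_genai as og

model = og.Model("FunctionGemma-270m-ONNX-CPU")  # folder containing the ONNX model
tokenizer = og.Tokenizer(model)

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)

generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode("What is the capital of France?"))
while not generator.is_done():
    generator.generate_next_token()
print(tokenizer.decode(generator.get_sequence(0)))
```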
GMorgulis/gemma-3-4b-it-tiger-alpha-135-layer15-end-ft0.42 | GMorgulis | 2025-12-14T19:45:45Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-4b-it",
"base_model:finetune:google/gemma-3-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-12-14T18:39:04Z | # Model Card for gemma-3-4b-it-tiger-alpha-135-layer15-end-ft0.42
This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If ... | [] |
Robertp423/Qwen3-32B-Aevum-Merged-Q6_K-GGUF | Robertp423 | 2025-10-14T15:36:22Z | 2 | 0 | null | [
"gguf",
"unsloth",
"llama-cpp",
"gguf-my-repo",
"base_model:Robertp423/Qwen3-32B-Aevum-Merged",
"base_model:quantized:Robertp423/Qwen3-32B-Aevum-Merged",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-10-14T15:34:24Z | # Robertp423/Qwen3-32B-Aevum-Merged-Q6_K-GGUF
This model was converted to GGUF format from [`Robertp423/Qwen3-32B-Aevum-Merged`](https://huggingface.co/Robertp423/Qwen3-32B-Aevum-Merged) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original m... | [] |
surindersinghssj/surt-small-v1-training | surindersinghssj | 2026-04-06T08:58:06Z | 153 | 0 | null | [
"safetensors",
"whisper",
"automatic-speech-recognition",
"gurbani",
"punjabi",
"gurmukhi",
"training-checkpoint",
"pa",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"region:us"
] | automatic-speech-recognition | 2026-04-04T16:38:36Z | # Surt Small v1 — Training Checkpoints
This repo contains the **best training checkpoint** for the [Surt Small v1](https://huggingface.co/surindersinghssj/surt-small-v1) Gurbani ASR model.
## Current Checkpoint
| Parameter | Value |
|-----------|-------|
| **Step** | 3400 / 5000 |
| **WER** | **14.88%** |
| **CER** ... | [
{
"start": 2,
"end": 12,
"text": "Surt Small",
"label": "training method",
"score": 0.7158442735671997
},
{
"start": 101,
"end": 111,
"text": "Surt Small",
"label": "training method",
"score": 0.706455647945404
}
] |
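To try a checkpoint like this one, the standard transformers ASR pipeline applies; a hedged sketch follows. The audio filename is a placeholder and the chunking setting is an assumption.
```python
# Hedged sketch: transcribing Punjabi/Gurmukhi audio with this checkpoint.
# The audio path is a placeholder; the chunk length is an assumption.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="surindersinghssj/surt-small-v1-training",
)
result = asr("shabad_sample.wav", chunk_length_s=30)  # long-form audio chunking
print(result["text"])
```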
MurrellLab/LaProteina.jl | MurrellLab | 2026-04-18T15:44:32Z | 0 | 0 | null | [
"arxiv:2507.09466",
"region:us"
] | null | 2026-02-22T09:21:55Z | # La-Proteina SafeTensors Weights
These are the pretrained weights for [La-Proteina](https://github.com/NVIDIA-Digital-Bio/la-proteina),
converted from the original PyTorch `.ckpt` checkpoints to SafeTensors format.
## Source
- **Original repository**: https://github.com/NVIDIA-Digital-Bio/la-proteina
- **Paper**: h... | [] |
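Loading converted SafeTensors weights follows the usual pattern; a hedged sketch for inspecting them is below. The `.safetensors` filename is an assumption based on the repo description.
```python
# Hedged sketch: inspecting converted La-Proteina weights with safetensors.
# The .safetensors filename is an assumption; check the repo's file listing.
from safetensors.torch import load_file

state_dict = load_file("la_proteina.safetensors")
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape), tensor.dtype)
```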
Klein2303/speecht5_finetuned_voxpopuli_en | Klein2303 | 2025-10-29T13:39:19Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2025-10-28T18:05:52Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_en
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/s... | [] |
wikilangs/srn | wikilangs | 2026-01-10T22:33:39Z | 0 | 0 | wikilangs | [
"wikilangs",
"nlp",
"tokenizer",
"embeddings",
"n-gram",
"markov",
"wikipedia",
"feature-extraction",
"sentence-similarity",
"tokenization",
"n-grams",
"markov-chain",
"text-mining",
"fasttext",
"babelvec",
"vocabulous",
"vocabulary",
"monolingual",
"family-germanic_west_anglofri... | text-generation | 2026-01-10T22:33:23Z | # Sranan Tongo - Wikilangs Models
## Comprehensive Research Report & Full Ablation Study
This repository contains NLP models trained and evaluated by Wikilangs, specifically on **Sranan Tongo** Wikipedia data.
We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and word embeddings.
## 📋 Repos... | [] |
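As a toy illustration of the character-level Markov-chain modeling the report analyzes (purely illustrative; not the Wikilangs implementation), consider the sketch below.
```python
# Toy character-level Markov chain of the kind the report analyzes.
# Purely illustrative -- not the actual Wikilangs training code.
import random
from collections import defaultdict

def train_markov(text: str, order: int = 2) -> dict:
    chain = defaultdict(list)
    for i in range(len(text) - order):
        chain[text[i : i + order]].append(text[i + order])
    return chain

def generate(chain: dict, seed: str, order: int = 2, length: int = 40) -> str:
    out = seed
    for _ in range(length):
        options = chain.get(out[-order:])
        if not options:
            break
        out += random.choice(options)
    return out

chain = train_markov("mi lobi sranan tongo " * 20)  # tiny Sranan Tongo sample
print(generate(chain, "mi"))
```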
s98s86/bert-finetuned-nerdjk | s98s86 | 2025-09-03T09:12:27Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-09-03T09:11:52Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-nerdjk
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conl... | [] |
mradermacher/FACT-1-GGUF | mradermacher | 2025-09-09T18:19:17Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:joel-crasto/TEXT-01",
"base_model:quantized:joel-crasto/TEXT-01",
"endpoints_compatible",
"region:us"
] | null | 2025-09-01T22:55:18Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
Brooooooklyn/Qwen3.5-27B-UD-Q6_K_XL-mlx | Brooooooklyn | 2026-03-29T15:55:10Z | 0 | 1 | mlx-node | [
"mlx-node",
"safetensors",
"qwen3_5",
"mlx",
"quantized",
"awq",
"6-bit",
"qwen3.5",
"hybrid-attention",
"gated-delta-net",
"apple-silicon",
"unsloth-dynamic",
"text-generation",
"conversational",
"en",
"zh",
"base_model:Qwen/Qwen3.5-27B",
"base_model:quantized:Qwen/Qwen3.5-27B",
... | text-generation | 2026-03-29T15:29:08Z | # Qwen3.5-27B — UD-Q6_K_XL (mlx-node)
6-bit base mixed-precision quantization of [Qwen/Qwen3.5-27B](https://huggingface.co/Qwen/Qwen3.5-27B) for Apple Silicon via [mlx-node](https://github.com/mlx-node/mlx-node).
| | Original (BF16) | This Model |
|---|---|---|
| **Size** | ~51 GB | **27 GB** |
| **Precision** | BF16 u... | [] |
onnx-community/rugpt3medium_based_on_gpt2-ONNX | onnx-community | 2026-01-10T12:12:28Z | 5 | 0 | transformers.js | [
"transformers.js",
"onnx",
"gpt2",
"text-generation",
"PyTorch",
"Transformers",
"ru",
"arxiv:2309.10931",
"base_model:ai-forever/rugpt3medium_based_on_gpt2",
"base_model:quantized:ai-forever/rugpt3medium_based_on_gpt2",
"region:us"
] | text-generation | 2026-01-10T12:11:42Z | # rugpt3medium_based_on_gpt2 (ONNX)
This is an ONNX version of [ai-forever/rugpt3medium_based_on_gpt2](https://huggingface.co/ai-forever/rugpt3medium_based_on_gpt2). It was automatically converted and uploaded using [this Hugging Face Space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
## Usage wi... | [] |
majentik/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-TurboQuant-GGUF-Q2_K | majentik | 2026-05-04T12:34:14Z | 0 | 0 | null | [
"gguf",
"nemotron",
"multimodal",
"mamba2",
"moe",
"quantized",
"turboquant",
"base_model:nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16",
"base_model:quantized:nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-05-04T12:33:47Z | # Nemotron-3-Nano-Omni-30B-A3B-Reasoning - TurboQuant GGUF Q2_K
GGUF Q2_K quantization of `Nemotron-3-Nano-Omni-30B-A3B-Reasoning` (`nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16`) using the TurboQuant weight method.
The `Q2_K.gguf` binary in this repo is loaded by `llama.cpp` / `llama-mtmd-cli`.
For multimodal infer... | [] |
nithishbasireddy/el-defect-training | nithishbasireddy | 2026-04-23T10:32:27Z | 0 | 0 | null | [
"region:us"
] | null | 2026-04-23T06:36:43Z | # EL Defect Detection — Training Package
Train a **U-Net++ with EfficientNet-B4 encoder** for solar cell EL defect segmentation.
## Quick Start (RTX 4060 / any CUDA GPU)
```bash
# 1. Clone this repo
git clone https://huggingface.co/nithishbasireddy/el-defect-training
cd el-defect-training
# 2. Install dependencies
... | [
{
"start": 798,
"end": 804,
"text": "E-SCDD",
"label": "training method",
"score": 0.7058756351470947
}
] |
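The architecture named in the card maps directly onto segmentation_models_pytorch; a hedged sketch follows, in which the class count, input channels, and input size are assumptions rather than the repo's actual configuration.
```python
# Hedged sketch of the U-Net++ / EfficientNet-B4 setup the card names,
# via segmentation_models_pytorch; classes and channels are assumptions.
import torch
import segmentation_models_pytorch as smp

model = smp.UnetPlusPlus(
    encoder_name="efficientnet-b4",
    encoder_weights="imagenet",   # ImageNet pretraining for the encoder
    in_channels=1,                # EL images are typically grayscale
    classes=4,                    # assumed number of defect classes
)
x = torch.randn(2, 1, 256, 256)   # batch of EL crops (size divisible by 32)
logits = model(x)                 # -> (2, 4, 256, 256)
print(logits.shape)
```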
HolSoul/YandexGPT-5-Lite-8B-stomatology-patient_ver2_7ep | HolSoul | 2025-12-23T12:19:49Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:yandex/YandexGPT-5-Lite-8B-instruct",
"base_model:finetune:yandex/YandexGPT-5-Lite-8B-instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-12-23T10:05:29Z | # Model Card for YandexGPT-5-Lite-8B-stomatology-patient_ver2_7ep
This model is a fine-tuned version of [yandex/YandexGPT-5-Lite-8B-instruct](https://huggingface.co/yandex/YandexGPT-5-Lite-8B-instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers im... | [] |
Carmendlr/clasificador-sst5 | Carmendlr | 2025-12-15T13:55:39Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"electra",
"text-classification",
"sst5",
"sentiment-analysis",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"base_model:finetune:mrm8488/electricidad-base-discriminator",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-12-15T13:54:50Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-sst5
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/... | [] |
seeingterra/Magistaroth-24B-v1.1-Q3_K_M-GGUF | seeingterra | 2026-03-02T11:26:05Z | 21 | 0 | transformers | [
"transformers",
"gguf",
"pdq",
"merge",
"mergekit",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:OccultAI/illuminati_imatrix_v1",
"base_model:DarkArtsForge/Magistaroth-24B-v1.1",
"base_model:quantized:DarkArtsForge/Magistaroth-24B-v1.1",
"license:apache-2.0",
"endpoints_compatible",
"region... | null | 2026-03-02T11:25:10Z | # seeingterra/Magistaroth-24B-v1.1-Q3_K_M-GGUF
This model was converted to GGUF format from [`DarkArtsForge/Magistaroth-24B-v1.1`](https://huggingface.co/DarkArtsForge/Magistaroth-24B-v1.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [origina... | [] |
lejelly/deepseek-7b-math-code-lambda075 | lejelly | 2026-02-12T12:13:26Z | 2 | 0 | null | [
"safetensors",
"llama",
"model-merge",
"hermite-interpolation",
"deepseek",
"base_model:deepseek-ai/deepseek-coder-7b-instruct-v1.5",
"base_model:finetune:deepseek-ai/deepseek-coder-7b-instruct-v1.5",
"region:us"
] | null | 2026-02-12T12:11:09Z | # deepseek-7b-math-code-lambda075
A linear-interpolation merge of two models.
## Merge Configuration
| Parameter | Value |
|-----------|-------|
| Model A | `deepseek-ai/deepseek-math-7b-instruct` |
| Model B | `deepseek-ai/deepseek-coder-7b-instruct-v1.5` |
| λ_a | 0.75 |
| λ_b | 0.25 |
| Formula | θ* = 0.75 × θ_a + 0.25 × θ_b |
| dtype | to... | [] |
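The merge formula in the table is plain linear interpolation over parameters; a hedged sketch over two state dicts follows. The model ids and coefficients come from the table; the snippet assumes the two checkpoints share parameter names and shapes, as the merge requires, and the output path is an assumption.
```python
# Hedged sketch of the linear-interpolation merge in the table:
# theta* = 0.75 * theta_a + 0.25 * theta_b, applied parameter-wise.
import torch
from transformers import AutoModelForCausalLM

model_a = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-math-7b-instruct", torch_dtype=torch.float32)
model_b = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-7b-instruct-v1.5", torch_dtype=torch.float32)

lam_a, lam_b = 0.75, 0.25
sd_b = model_b.state_dict()
merged = {
    name: lam_a * param + lam_b * sd_b[name]  # assumes matching keys/shapes
    for name, param in model_a.state_dict().items()
}
model_a.load_state_dict(merged)
model_a.save_pretrained("deepseek-7b-math-code-lambda075")
```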
jz666/simpo-train-largest-30-ppl-rejected | jz666 | 2025-10-14T13:29:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"alignment-handbook",
"trl",
"simpo",
"generated_from_trainer",
"conversational",
"dataset:jz666/gemma2-ultrafeedback-ppl-split",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"license:gemma",
"tex... | text-generation | 2025-10-14T12:59:15Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# simpo-train-largest-30-ppl-rejected
This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/ge... | [] |