| modelId (string, 9–122 chars) | author (string, 2–36 chars) | last_modified (timestamp[us, tz=UTC], 2021-05-20 01:31:09 to 2026-05-05 06:14:24) | downloads (int64, 0 to 4.03M) | likes (int64, 0 to 4.32k) | library_name (string, 189 classes) | tags (list, 1–237 items) | pipeline_tag (string, 53 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2026-05-05 05:54:22) | card (string, 500–661k chars) | entities (list, 0–12 items) |
|---|---|---|---|---|---|---|---|---|---|---|
mradermacher/tvall43-Qwen3.5-0.8B-heretic-v3-GGUF | mradermacher | 2026-04-05T20:13:51Z | 952 | 0 | transformers | [
"transformers",
"gguf",
"heretic",
"uncensored",
"decensored",
"abliterated",
"en",
"base_model:CCSSNE/tvall43-Qwen3.5-0.8B-heretic-v3",
"base_model:quantized:CCSSNE/tvall43-Qwen3.5-0.8B-heretic-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-24T16:39:13Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: 1 -->
static ... | [] |
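The card above lists static GGUF quants (Q2_K through f16) but is truncated before any usage notes. A minimal sketch of loading one quant with llama-cpp-python; the filename glob is an assumption based on mradermacher's usual naming scheme, not something the truncated card confirms:

```python
# Hedged sketch: load one of the listed static quants with llama-cpp-python.
# The filename pattern below is an assumption; pick any quant named in the card.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/tvall43-Qwen3.5-0.8B-heretic-v3-GGUF",
    filename="*Q4_K_M.gguf",  # assumed name pattern for one of the quants above
)
out = llm.create_chat_completion(messages=[{"role": "user", "content": "Hello!"}])
print(out["choices"][0]["message"]["content"])
```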
AlignmentResearch/obfuscation-atlas-gemma-3-12b-it-kl0.1-det10-seed2-diverse_deception_probe | AlignmentResearch | 2026-02-20T21:59:21Z | 1 | 0 | peft | [
"peft",
"deception-detection",
"rlvr",
"alignment-research",
"obfuscation-atlas",
"lora",
"model-type:honest",
"arxiv:2602.15515",
"base_model:google/gemma-3-12b-it",
"base_model:adapter:google/gemma-3-12b-it",
"license:mit",
"region:us"
] | null | 2026-02-16T09:29:36Z | # RLVR-trained policy from The Obfuscation Atlas
This is a policy trained on MBPP-Honeypot with deception probes,
from the [Obfuscation Atlas paper](https://arxiv.org/abs/2602.15515),
uploaded for reproducibility and further research.
The training code and RL environment are available at: https://github.com/Alignment... | [] |
zekaemo/Indobert-Sentiment-Analysis-with-Bayes-Optimization | zekaemo | 2025-08-18T13:06:30Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:indobenchmark/indobert-base-p2",
"base_model:finetune:indobenchmark/indobert-base-p2",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-08-18T11:55:03Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Indobert-Sentiment-Analysis-with-Bayes-Optimization
This model is a fine-tuned version of [indobenchmark/indobert-base-p2](https:... | [] |
neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_prover0_1_0_iter_2_prover0_175607 | neural-interactive-proofs | 2025-08-25T00:11:28Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-25T00:10:43Z | # Model Card for finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_prover0_1_0_iter_2_prover0_175607
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
``... | [] |
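The TRL "Quick start" block above is cut off by the card truncation; the same template recurs in several rows below (the time-machine question). A minimal sketch of the boilerplate that TRL-generated cards typically contain at that point; generation settings are assumptions:

```python
# Typical TRL model-card quick start (reconstructed pattern, not verbatim from the truncated card).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_qwen2.5_32B_prover_debate_prover0_1_0_iter_2_prover0_175607",
)
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```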
LirihSetyo/results-mbg | LirihSetyo | 2025-08-16T08:00:16Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:indobenchmark/indobert-base-p2",
"base_model:finetune:indobenchmark/indobert-base-p2",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-08-12T03:34:31Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results-mbg
This model is a fine-tuned version of [indobenchmark/indobert-base-p2](https://huggingface.co/indobenchmark/indobert-... | [] |
etwithin/pytorch-scanner-bypass-poc | etwithin | 2026-03-06T14:46:40Z | 0 | 0 | null | [
"region:us"
] | null | 2026-03-06T14:46:37Z | # PyTorch Scanner Bypass PoC
## Vulnerability
Malicious `.pt` file achieves Arbitrary Code Execution via `torch.load()`,
while bypassing both picklescan 1.0.4 and modelscan 0.8.8.
## Technique
Uses `marshal.loads` + `types.FunctionType` + `importlib.import_module` chain.
All three globals are Suspicious (not Dangerou... | [] |
priorcomputers/phi-3.5-mini-instruct-cn-minimal-kr0.1-a0.075-creative | priorcomputers | 2026-02-02T16:58:15Z | 0 | 0 | null | [
"safetensors",
"phi3",
"creativityneuro",
"llm-creativity",
"mechanistic-interpretability",
"custom_code",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:finetune:microsoft/Phi-3.5-mini-instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-02-02T16:56:52Z | # phi-3.5-mini-instruct-cn-minimal-kr0.1-a0.075-creative
This is a **CreativityNeuro (CN)** modified version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct).
## Model Details
- **Base Model**: microsoft/Phi-3.5-mini-instruct
- **Modification**: CreativityNeuro weight scal... | [] |
suaybdgns/elektrik-uzmani-llama3 | suaybdgns | 2026-01-22T11:26:45Z | 4 | 0 | null | [
"gguf",
"electrical-engineering",
"tedaş",
"turkish",
"llama-3",
"fine-tuned",
"unsloth",
"tr",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2026-01-21T16:16:11Z | # ⚡ Elektrik Uzmanı Llama-3 (8B) - Fine-Tuned with Unsloth
This model is a high-accuracy assistant language model **fine-tuned** with the **Unsloth** library on electrical distribution networks, TEDAŞ standards, and transformer operating principles.
## 📝 About the Project
This work, Llama... | [
{
"start": 51,
"end": 58,
"text": "Unsloth",
"label": "training method",
"score": 0.7078586220741272
},
{
"start": 165,
"end": 172,
"text": "Unsloth",
"label": "training method",
"score": 0.8020384907722473
},
{
"start": 657,
"end": 664,
"text": "Unsloth",... |
citrinegui/Qwen2.5-3B-Instruct_countdown2345_grpo_vrex_0.5_0.5_SEC1.0DRO0.0G0.0_minp0.0_1600 | citrinegui | 2025-09-24T15:09:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"dataset:countdown-dataset",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"text-generation-inference",
"endpoi... | text-generation | 2025-09-24T03:28:58Z | # Model Card for Qwen2.5-3B-Instruct_countdown2345_grpo_vrex_0.5_0.5_SEC1.0DRO0.0G0.0_minp0.0_1600
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the [countdown-dataset](https://huggingface.co/datasets/countdown-dataset) dataset.
It has been trained... | [] |
groderg/DroneDinov2-large-2025_11_09_28864-bs32_freeze_probs | groderg | 2025-11-09T08:18:49Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"dinov2",
"multilabel-image-classification",
"multilabel",
"generated_from_trainer",
"eng",
"license:cc0-1.0",
"region:us"
] | null | 2025-11-09T07:01:10Z | ---
language:
- eng
license: cc0-1.0
tags:
- multilabel-image-classification
- multilabel
- generated_from_trainer
base_model: DroneDinov2-large-2025_11_09_28864-bs32_freeze_probs
model-index:
- name: DroneDinov2-large-2025_11_09_28864-bs32_freeze_probs
results: []
---
DroneDinov2 is a fine-tuned version of [DroneDi... | [
{
"start": 789,
"end": 793,
"text": "ReLU",
"label": "training method",
"score": 0.8175352215766907
}
] |
chris241094/smolVLA-v00 | chris241094 | 2026-03-19T14:15:28Z | 36 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:chris241094/record-level1-3",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-19T14:14:56Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
Muapi/randommaxx-animefy | Muapi | 2025-08-19T09:25:45Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T09:25:01Z | # RandomMaxx Animefy

**Base model**: Flux.1 D
**Trained words**:
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "ap... | [] |
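The usage snippet above is truncated mid-header. A hedged continuation of the pattern it starts; the API-key header and payload field names below are assumptions, not confirmed MUAPI parameters:

```python
# Continuation sketch of the truncated MUAPI snippet; field names are assumptions.
import os
import requests

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {
    "Content-Type": "application/json",
    "x-api-key": os.environ["MUAPI_KEY"],  # assumed header name for the key from muapi.ai/access-keys
}
payload = {"prompt": "an anime-style portrait"}  # assumed request body
resp = requests.post(url, headers=headers, json=payload)
print(resp.status_code, resp.json())
```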
reEtym/reEtym | reEtym | 2026-04-13T20:20:21Z | 0 | 0 | pytorch | [
"pytorch",
"text-generation",
"interpretability",
"etymology",
"feature-disentanglement",
"causal-lm",
"en",
"dataset:Skylion007/openwebtext",
"doi:10.57967/hf/8378",
"license:mit",
"model-index",
"region:us"
] | text-generation | 2026-04-12T15:40:09Z | # reEtym
[ [中文](README_CN.md) | English ]
**A Metal Soul In My Hand** — An interpretability-native feature-disentangled Transformer architecture.
[](https://doi.org/10.5281/zenodo.19556768)
Built on the hypothesis that human language is composed of f... | [] |
zizi-0123/mhqa_qwen_sft_behavior | zizi-0123 | 2025-11-07T18:35:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-11-07T18:34:16Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mhqa_qwen3_1.7b_sft_behavior
This model is a fine-tuned version of [Qwen3-1.7B](https://huggingface.co/Qwen3-1.7B) on the deep_re... | [] |
vivekdhayaal/vit-cvt-cifar10-experiments | vivekdhayaal | 2026-04-19T10:19:55Z | 0 | 0 | null | [
"computer-vision",
"vision-transformer",
"cvt",
"cifar-10",
"pytorch",
"en",
"region:us"
] | null | 2026-04-18T18:32:52Z | # Vision Transformer & Convolutional ViT on CIFAR-10
This repository contains the experimental results, training curves, and attention map visualizations for a custom implementation of a Vision Transformer (ViT) and Convolutional Vision Transformer (CvT) trained from scratch on CIFAR-10.
## Directory Structure
- **`c... | [] |
xistoh162108/bge-m3-kaist-v1 | xistoh162108 | 2026-01-01T11:46:39Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"dense",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2026-01-01T11:36:49Z | # SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Mo... | [] |
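A minimal sketch of the standard sentence-transformers usage the truncated card is describing; the input sentences are placeholders:

```python
# Standard SentenceTransformer usage; 1024-dim output per the card above.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("xistoh162108/bge-m3-kaist-v1")
embeddings = model.encode(["first placeholder sentence", "second placeholder sentence"])
print(embeddings.shape)  # expected (2, 1024)
```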
Ekliipce/wearit-garment-mask | Ekliipce | 2025-11-15T12:21:00Z | 17 | 0 | transformers | [
"transformers",
"garment-mask-generation",
"image-segmentation",
"image-inpainting",
"fashion",
"garment-mask",
"densepose",
"human-parsing",
"pytorch",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2025-11-14T14:11:32Z | # WearIT Garment Mask Generation
## Model Description
**WearIT Garment Mask** is a specialized image segmentation pipeline for generating precise garment masks suitable for virtual try-on and image inpainting applications. The model combines three state-of-the-art computer vision models to create intelligent, variabl... | [] |
PhTae/MolBridge-Gen-Base-C2S | PhTae | 2025-10-31T04:23:11Z | 0 | 0 | null | [
"safetensors",
"t5",
"chemistry",
"en",
"arxiv:2510.26157",
"base_model:PhTae/MolBridge-Gen-Base",
"base_model:finetune:PhTae/MolBridge-Gen-Base",
"license:apache-2.0",
"region:us"
] | null | 2025-10-28T07:05:40Z | **EMNLP 2025 main**
"Bridging the Gap Between Molecule and Textual Descriptions via Substructure-aware Alignment"
[GitHub](https://github.com/Park-ing-lot/MolBridge)
[Paper](https://arxiv.org/abs/2510.26157)
This model is trained on ChEBI-20 dataset.
```python
from transformers import AutoTokenizer, T5ForCondition... | [] |
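The Python block above is truncated after its import line. A hedged completion using the standard `T5ForConditionalGeneration` generate loop; the prompt format is an assumption, since the card text is cut off:

```python
# Hedged completion of the truncated import; the prompt style is assumed.
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("PhTae/MolBridge-Gen-Base-C2S")
model = T5ForConditionalGeneration.from_pretrained("PhTae/MolBridge-Gen-Base-C2S")

inputs = tokenizer("Describe this molecule: CCO", return_tensors="pt")  # assumed prompt format
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```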
K1mG0ng/AI-taste-communication-4B | K1mG0ng | 2026-05-01T16:57:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"causal-lm",
"social-science",
"research-evaluation",
"fine-tuned",
"conversational",
"en",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"r... | text-generation | 2026-05-01T08:10:31Z | # AI-Taste-Communication-4B
This repository provides a fine-tuned Qwen3 4B model for AI Taste experiments on social science research articles, with a current focus on Communication article evaluation.
## Model Summary
- Base model: `Qwen/Qwen3-4B`
- Architecture: `Qwen3ForCausalLM`
- Format: Hugging Face Transformer... | [] |
unionpoint/vit_small_plus_patch16_dinov3.ft_plantdoc_384 | unionpoint | 2026-04-19T22:12:02Z | 0 | 0 | timm | [
"timm",
"image-classification",
"transformers",
"dataset:lvd-1689m",
"arxiv:2508.10104",
"arxiv:2010.11929",
"license:cc-by-nc-4.0",
"region:us"
] | image-classification | 2026-04-19T22:10:46Z | # Model card for vit_small_plus_patch16_dinov3.ft_plantdoc_384
## Overview
This model classifies diseases from plant images.
- Dataset size: 8000 images
- Number of classes: 39
- Architecture: DINOv3 ViT Small (384)
## Metrics
- mAP: **0.91**
- Accuracy: **0.83**
## Model Details
- **Model Type:** Image cl... | [] |
SandLogicTechnologies/Hermes-2-Pro-Llama-3-8B-GGUF | SandLogicTechnologies | 2025-09-29T11:46:54Z | 22 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"llama3",
"DPO",
"RLHF",
"Function calling",
"Quantized",
"text-generation",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
"base_model:quantized:NousResearch/Hermes-2-Pro-Llama-3-8B",
"license:llama3",
"endpoin... | text-generation | 2025-09-29T10:59:58Z | # Quantized Hermes 2 Pro Models
This repository provides quantized GGUF versions of Hermes 2 Pro model. Hermes 2 Pro is an upgraded version of Nous Hermes 2, trained on a cleaned OpenHermes 2.5
dataset plus a new in-house Function Calling and JSON Mode dataset. These 4-bit and 5-bit quantized variants retain the orig... | [] |
madout/jarvesv1 | madout | 2025-09-18T02:59:33Z | 0 | 1 | null | [
"text-to-speech",
"ja",
"en",
"arxiv:2509.06942",
"base_model:microsoft/VibeVoice-1.5B",
"base_model:finetune:microsoft/VibeVoice-1.5B",
"license:mit",
"region:us"
] | text-to-speech | 2025-09-18T02:56:31Z | ---
license: mit
---... | [] |
contemmcm/fe77ea014d611556523ecc53f12422e8 | contemmcm | 2025-10-31T06:59:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/mbart-large-cc25",
"base_model:finetune:facebook/mbart-large-cc25",
"endpoints_compatible",
"region:us"
] | null | 2025-10-31T06:42:30Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fe77ea014d611556523ecc53f12422e8
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/faceboo... | [] |
keras/qwen3_1.7b_en | keras | 2026-02-26T22:36:44Z | 18 | 0 | keras-hub | [
"keras-hub",
"text-generation",
"region:us"
] | text-generation | 2025-08-29T23:12:46Z | ### Model Overview
# Model Summary
Qwen is the large language model and large multimodal model series of the Qwen Team, Alibaba Group. Both language models and multimodal models are pretrained on large-scale multilingual and multimodal data and post-trained on quality data for aligning to human preferences. Qwen is ca... | [] |
deyucao/qwen3-4b-agent-trajectory-lora_2026022706 | deyucao | 2026-02-27T07:11:08Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen3",
"lora",
"agent",
"tool-use",
"alfworld",
"dbbench",
"text-generation",
"conversational",
"en",
"dataset:u-10bei/sft_alfworld_trajectory_dataset_v5",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache... | text-generation | 2026-02-27T07:10:02Z | # qwen3-4b-agent-trajectory-lora_2026022706
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve... | [
{
"start": 74,
"end": 78,
"text": "LoRA",
"label": "training method",
"score": 0.9036080241203308
},
{
"start": 145,
"end": 149,
"text": "LoRA",
"label": "training method",
"score": 0.9271915555000305
},
{
"start": 191,
"end": 195,
"text": "LoRA",
"lab... |
allenai/HiRO-ACE | allenai | 2026-01-25T20:52:25Z | 2 | 14 | fme | [
"fme",
"arxiv:2512.18224",
"license:apache-2.0",
"region:us"
] | null | 2026-01-12T20:43:20Z | <img src="ACE-logo.png" alt="Logo for the ACE Project" style="width: auto; height: 50px;">
# HiRO-ACE
The HiRO-ACE framework enables efficient generation of 3 km precipitation fields over decades of simulated climate and arbitrary regions of the globe.
HiRO (High Resolution Output) is a diffusion model which generate... | [] |
DMIR01/DMRetriever-4B-PT | DMIR01 | 2025-10-23T22:53:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"information-retrieval",
"LLM",
"Embedding",
"text-retrieval",
"disaster-management",
"en",
"arxiv:2510.15087",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-retrieval | 2025-10-23T19:18:20Z | This model is trained through the approach described in [DMRetriever: A Family of Models for Improved Text Retrieval in Disaster Management](https://www.arxiv.org/abs/2510.15087).
The associated GitHub repository is available [here](https://github.com/KaiYin97/DMRETRIEVER).
This model has 4B parameters and it is the p... | [] |
mradermacher/OpenStar-13b-GGUF | mradermacher | 2025-08-27T10:41:37Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:NewstaR/OpenStar-13b",
"base_model:quantized:NewstaR/OpenStar-13b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-27T09:42:56Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
sdfprotocol/sdf-extract | sdfprotocol | 2026-02-09T20:17:55Z | 2 | 1 | null | [
"gguf",
"sdf",
"extraction",
"smollm3",
"structured-data",
"web-content",
"text-generation",
"en",
"base_model:HuggingFaceTB/SmolLM3-3B",
"base_model:quantized:HuggingFaceTB/SmolLM3-3B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2026-02-09T19:02:29Z | # SDF Extract
Structured data extractor for the [SDF Protocol](https://sdfprotocol.org). Fine-tuned from SmolLM3-3B using QLoRA.
## Purpose
Extracts structured semantic data from web content: entities, claims, relationships, summaries, and type-specific fields. Takes the type classification from [sdf-classify](https... | [] |
letaldir/poca-SoccerTwos | letaldir | 2025-12-14T19:24:04Z | 20 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2025-12-14T17:44:28Z | # **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Document... | [] |
BootesVoid/cme8rr26s02afrts8cpzivtkn_cme8rw22c02bfrts8eqs5pzal_2 | BootesVoid | 2025-08-12T17:09:47Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-08-12T17:09:45Z | # Cme8Rr26S02Afrts8Cpzivtkn_Cme8Rw22C02Bfrts8Eqs5Pzal_2
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: http... | [] |
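The card above says the LoRA can be used with diffusers or ComfyUI. A hedged diffusers sketch; trigger words and LoRA scale are unknown from the truncated card, and the prompt and settings are placeholders:

```python
# Hedged diffusers usage for a FLUX.1-dev LoRA; settings are placeholders.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("BootesVoid/cme8rr26s02afrts8cpzivtkn_cme8rw22c02bfrts8eqs5pzal_2")
image = pipe("a portrait photo", num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("out.png")
```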
ConicCat/GLM-4.7-Architect-355B-A32B-LoRA | ConicCat | 2026-02-15T18:25:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation",
"en",
"base_model:zai-org/GLM-4.7",
"base_model:finetune:zai-org/GLM-4.7",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-15T18:22:05Z | # ConicCat/GLM-4.7-Architect-355B-A32B
<img src="The_Architect.png" alt="Big Bottom Text" width="1000" height="200">
A finetune of GLM-4.5 Air to improve prose and writing quality and attempt to remove the bulk of GLM-isms using a Gutenberg-like methodology.
No particular attempt was made to preserve thinking ability;... | [
{
"start": 231,
"end": 257,
"text": "Gutenberg-like methodology",
"label": "training method",
"score": 0.8843204379081726
}
] |
stevenbucaille/lwdetr_medium_30e_objects365 | stevenbucaille | 2026-01-13T20:25:16Z | 58 | 0 | transformers | [
"transformers",
"safetensors",
"lw_detr",
"object-detection",
"vision",
"dataset:coco",
"arxiv:2406.03459",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2025-09-21T04:42:32Z | # LW-DETR (Light-Weight Detection Transformer)
LW-DETR, a Light-Weight DEtection TRansformer model, is designed to be a real-time object detection alternative that outperforms conventional convolutional (YOLO-style) and earlier transformer-based (DETR) methods in terms of speed and accuracy trade-off. It was introduce... | [] |
asksolz/murmor-qwen3-1b7-4bit-mlx | asksolz | 2026-04-27T08:01:12Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-1.7B",
"base_model:quantized:Qwen/Qwen3-1.7B",
"license:apache-2.0",
"4-bit",
"region:us"
] | text-generation | 2026-04-27T06:22:56Z | # mlx-community/Qwen3-1.7B-4bit
This model [mlx-community/Qwen3-1.7B-4bit](https://huggingface.co/mlx-community/Qwen3-1.7B-4bit) was
converted to MLX format from [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B)
using mlx-lm version **0.24.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from m... | [] |
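The `from m...` line above is truncated. The likely continuation, given the standard mlx-lm API the card is quoting (assumed, not verbatim from the card):

```python
# Likely continuation of the truncated snippet (standard mlx-lm usage; assumed).
from mlx_lm import load, generate

model, tokenizer = load("asksolz/murmor-qwen3-1b7-4bit-mlx")
print(generate(model, tokenizer, prompt="Hello", max_tokens=64))
```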
Alonto/rr3 | Alonto | 2026-03-03T07:45:42Z | 0 | 0 | null | [
"region:us"
] | null | 2026-03-03T07:35:41Z | # Rolling Slots Casino Promo Code CASH25NEW Get The Daily Cashback Bonus is calculated as 5% of the total deposits
This page explains how to use the Rolling Slots Casino promo code CASH25NEW and receive the Daily Cashback Bonus calculated as 5% of the total deposits. The offer is designed for players who want steady r... | [] |
fn-aka-mur/starter_sft_0030_upsample_lr1e5_2ep | fn-aka-mur | 2026-02-11T18:33:36Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-11T18:33:19Z | <[Task] Please fill this section in yourself>
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improve **structured ou... | [
{
"start": 121,
"end": 126,
"text": "QLoRA",
"label": "training method",
"score": 0.7974647283554077
}
] |
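This row, like several similar adapter rows in the dump, states that only LoRA adapter weights are shipped and the base model must be loaded separately. A minimal sketch of that load using the standard PEFT pattern; this is not code from the (truncated) card itself:

```python
# Standard PEFT pattern for "adapter weights only" repos such as the row above.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen3-4B-Instruct-2507"  # base model named in the row's tags
adapter_id = "fn-aka-mur/starter_sft_0030_upsample_lr1e5_2ep"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter
```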
Mayank022/Gemma_3_270M_SLM_from_scratch | Mayank022 | 2025-08-26T00:20:05Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2025-08-26T00:16:52Z | # Gemma 3 270M: Small Language Model Implementation from Scratch
A complete PyTorch implementation of Google's Gemma 3 270M small language model, trained from scratch on the TinyStories dataset.
[Click to see Github Code](https://github.com/Mayankpratapsingh022/Deep-Learning-from-Scratch/tree/main/%5B7%5D%20Gemma_3_2... | [] |
juniorrios/llama-gpt2-finewebedu | juniorrios | 2026-02-17T18:01:15Z | 3 | 0 | null | [
"safetensors",
"llama_te",
"custom_code",
"region:us"
] | null | 2026-02-16T13:10:16Z | # llama-gpt2-finewebedu
Checkpoint automatically exported from `out_fineweb-edu/best.pt`.
- step: 99200
- best_val_loss: 3.6788785457611084
- tokenizer: tiktoken `cl100k_base`
- loading:
- `AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)`
- `AutoModel.from_pretrained(repo_id, trust_rem... | [] |
sbintuitions/sarashina1-65b | sbintuitions | 2024-06-27T06:56:36Z | 11 | 6 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"ja",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"deploy:azure",
"region:us"
] | text-generation | 2024-06-07T11:57:56Z | # Sarashina1-65B
This repository provides Japanese language models trained by [SB Intuitions](https://www.sbintuitions.co.jp/).
## How to use
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, set_seed
model = AutoModelForCausalLM.from_pretrained("sbintuitions/sarashina... | [] |
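The snippet above is cut off mid-call. A hedged completion of the standard causal-LM pattern its imports set up; the dtype, device placement, and Japanese prompt are assumptions:

```python
# Hedged completion; generation settings are assumptions, not from the truncated card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, set_seed

model = AutoModelForCausalLM.from_pretrained(
    "sbintuitions/sarashina1-65b", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("sbintuitions/sarashina1-65b")
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
set_seed(42)
print(generator("おはようございます、今日の天気は", max_new_tokens=30))  # placeholder Japanese prompt
```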
mradermacher/EMBGuard-8B-GGUF | mradermacher | 2026-01-21T06:37:32Z | 13 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:EMBGuard/EMBGuard-8B",
"base_model:quantized:EMBGuard/EMBGuard-8B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-01-21T06:25:52Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
timm/csatv2_21m.sw_r512_in1k | timm | 2025-12-30T00:19:54Z | 56 | 0 | timm | [
"timm",
"safetensors",
"image-classification",
"transformers",
"dataset:imagenet-1k",
"license:apache-2.0",
"region:us"
] | image-classification | 2025-12-30T00:19:47Z | # Model card for csatv2_21m.sw_r512_in1k
A CSATv2 image classification model pretrained with `timm` on ImageNet-1k by Ross Wightman.
## Model Details
- **Model Type:** Image Classification / Feature Encoder
- **Model Stats:**
- Params (M): 20.7
- GMACs: 2.9
- Activations (M): 15.8
- Image size: 512 x 512
- **... | [] |
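A minimal sketch of loading this classifier through the standard timm hub API; the dummy tensor simply matches the card's stated 512 x 512 input size:

```python
# Standard timm load via the HF hub; input size per the card is 512 x 512.
import timm
import torch

model = timm.create_model("hf_hub:timm/csatv2_21m.sw_r512_in1k", pretrained=True).eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 512, 512))  # dummy batch at the stated input size
print(logits.shape)  # (1, 1000) for ImageNet-1k
```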
GeniusJunP/fail_akinoTano-fix-7k | GeniusJunP | 2025-10-06T12:19:43Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:GeniusJunP/akinoTano-fix",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-10-03T11:28:36Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
optimum-intel-internal-testing/tiny-random-MiniCPM-o-2_6 | optimum-intel-internal-testing | 2025-10-21T10:00:39Z | 13,080 | 1 | null | [
"safetensors",
"minicpmo",
"custom_code",
"license:apache-2.0",
"region:us"
] | null | 2025-10-21T10:00:35Z | ```py
from transformers import AutoConfig, AutoModel, logging
from transformers import AutoModel, AutoTokenizer
import torch
from PIL import Image
import os
logging.set_verbosity_error() # silence HF info spam
MODEL_ID = "openbmb/MiniCPM-o-2_6"
device = "cpu"
cfg = AutoConfig.from_pretrained(MODEL_ID, trust_remote_... | [] |
Tsagkas/lafa_v0 | Tsagkas | 2026-03-03T17:17:30Z | 34 | 0 | lerobot | [
"lerobot",
"safetensors",
"lafa",
"robotics",
"dataset:Tsagkas/dataset_deltas_lvfm_v0",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-03T10:15:19Z | # Model Card for lafa
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.c... | [] |
mradermacher/diallm-gemma-grpo-all-GGUF | mradermacher | 2026-04-21T06:04:23Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"grpo",
"en",
"base_model:jordanpainter/diallm-gemma-grpo-all",
"base_model:quantized:jordanpainter/diallm-gemma-grpo-all",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-04-21T05:58:48Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: 1 -->
static ... | [] |
sunflowersea/biogpt-tac-task53 | sunflowersea | 2026-04-04T23:39:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"biogpt",
"token-classification",
"generated_from_trainer",
"base_model:sunflowersea/biogpt-ncbi-disease-ner-task2",
"base_model:finetune:sunflowersea/biogpt-ncbi-disease-ner-task2",
"license:mit",
"endpoints_compatible",
"region:us"
] | token-classification | 2026-04-04T22:52:46Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biogpt-tac-task53
This model is a fine-tuned version of [sunflowersea/biogpt-ncbi-disease-ner-task2](https://huggingface.co/sunfl... | [] |
Jalkey/my_awesome_model | Jalkey | 2026-01-12T01:20:38Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"re... | text-classification | 2026-01-12T01:20:00Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/dis... | [] |
alexander1i/lustify-sdxl-inpaint-endpoint | alexander1i | 2025-08-29T11:22:55Z | 67 | 0 | diffusers | [
"diffusers",
"safetensors",
"Safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionXLInpaintPipeline",
"region:us"
] | text-to-image | 2025-08-28T13:11:27Z | # LUSTIFY! [SDXL NSFW checkpoint]_v2.0 INPAINTING

### Description:
> None
### Civitai Page: https://civitai.com/models/715933
You can use this with the [🧨Diffusers library](https://github.com/huggingface/diffusers)
### Diffusers
```py
from diffusers import Stable... | [] |
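The diffusers import above is truncated. A hedged completion using the `StableDiffusionXLInpaintPipeline` named in the row's tags; image paths and the prompt are placeholders:

```py
# Hedged completion; the pipeline class comes from the row's tags.
import torch
from diffusers import StableDiffusionXLInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
    "alexander1i/lustify-sdxl-inpaint-endpoint", torch_dtype=torch.float16
).to("cuda")
image = load_image("input.png")  # placeholder paths
mask = load_image("mask.png")
result = pipe(prompt="a photo", image=image, mask_image=mask).images[0]
result.save("out.png")
```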
devika-tiwari/gpt2_small_expandedbabyLM_100M_subj_10percent_44 | devika-tiwari | 2026-04-29T11:07:08Z | 49 | 0 | null | [
"pytorch",
"gpt2",
"generated_from_trainer",
"region:us"
] | null | 2026-04-22T18:48:13Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_small_expandedbabyLM_100M_subj_10percent_44
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown ... | [] |
nhonhoccode/qwen3-0-6b-cybersecqa-lora-8bit-20251111-1900 | nhonhoccode | 2025-11-11T19:01:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen",
"unsloth",
"cybersecurity",
"instruction-tuning",
"lora",
"kaggle",
"text-generation",
"conversational",
"en",
"dataset:zobayer0x01/cybersecurity-qa",
"base_model:unsloth/Qwen3-0.6B",
"base_model:adapter:unsloth/Qwen3-0.6B",
"license:apache-2.0",
] | text-generation | 2025-11-11T19:00:49Z | # qwen3-0-6b — Cybersecurity QA (LoRA 8-bit)
Fine-tuned on Kaggle using **LoRA**. (Quant: LoRA + 8-bit (bnb int8))
### Model Summary
- Base: `unsloth/Qwen3-0.6B`
- Trainable params: **10,092,544** / total **606,142,464**
- Train wall time (s): 31939.3
- Files: adapter_model.safetensors + adapter_config.json (LoRA) + t... | [] |
fawazo/qwen2.5-coder-3b-pentest-gguf | fawazo | 2025-12-09T02:46:18Z | 176 | 1 | null | [
"gguf",
"llama.cpp",
"pentesting",
"cybersecurity",
"jetson",
"quantized",
"base_model:Qwen/Qwen2.5-Coder-3B",
"base_model:quantized:Qwen/Qwen2.5-Coder-3B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-09T02:45:13Z | # Qwen2.5-Coder-3B Pentest - GGUF
GGUF quantizations of [fawazo/qwen2.5-coder-3b-pentest](https://huggingface.co/fawazo/qwen2.5-coder-3b-pentest) optimized for **Jetson Orin Nano (8GB)**.
## Model Description
An AI pentesting assistant fine-tuned on 150K+ cybersecurity examples covering:
- OWASP Top 10 vulnerabiliti... | [] |
FAWAS97/bge-base-financial-matryoshka | FAWAS97 | 2025-09-15T11:01:56Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:360",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"model-index",
"text... | sentence-similarity | 2025-09-15T11:00:03Z | # SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model trained on the json dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Mo... | [] |
forkjoin-ai/llama-3.1-70b-instruct-gguf | forkjoin-ai | 2026-03-20T16:38:48Z | 31 | 0 | llama-cpp | [
"llama-cpp",
"gguf",
"forkjoin-ai",
"text-generation",
"en",
"base_model:meta-llama/Llama-3.1-70B-Instruct",
"base_model:quantized:meta-llama/Llama-3.1-70B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | 2026-03-09T21:49:05Z | # Llama 3.1 70B Instruct
Forkjoin.ai conversion of [meta-llama/Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct) to GGUF format for edge deployment.
## Model Details
- **Source Model**: [meta-llama/Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct)
- **F... | [] |
qualiaadmin/c43f005c-b373-4480-85c9-f889f2182914 | qualiaadmin | 2026-01-14T08:42:34Z | 1 | 0 | lerobot | [
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:qualiaadmin/oneepisode",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-14T08:42:14Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |
lthn/lemma-mlx-8bit | lthn | 2026-04-11T18:48:14Z | 277 | 0 | mlx | [
"mlx",
"safetensors",
"gemma4",
"lemma",
"8bit",
"apple-silicon",
"multimodal",
"on-device",
"conversational",
"image-text-to-text",
"base_model:lthn/lemma",
"base_model:quantized:lthn/lemma",
"license:eupl-1.2",
"8-bit",
"region:us"
] | image-text-to-text | 2026-04-09T11:30:20Z | # Lemma — Gemma 4 E4B — MLX 8-bit
The mid-sized member of the Lemma model family by Lethean. An EUPL-1.2 fork of Gemma 4 E4B with the Lethean Ethical Kernel (LEK) merged into the weights.
This repo hosts the **MLX 8-bit** build for native Apple Silicon inference via [`mlx-lm`](https://github.com/ml-explore/mlx-lm) an... | [] |
iRanadheer/cards_qwen3.5_27b_norecot-lora | iRanadheer | 2026-05-02T20:04:25Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"hf_jobs",
"unsloth",
"trl",
"base_model:Qwen/Qwen3.5-27B",
"base_model:finetune:Qwen/Qwen3.5-27B",
"endpoints_compatible",
"region:us"
] | null | 2026-05-02T18:16:55Z | # Model Card for cards_qwen3.5_27b_norecot-lora
This model is a fine-tuned version of [Qwen/Qwen3.5-27B](https://huggingface.co/Qwen/Qwen3.5-27B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, bu... | [] |
phuongntc/Multi_EvalSumVietN_FullDoc | phuongntc | 2026-01-23T11:14:08Z | 0 | 0 | null | [
"safetensors",
"deberta-v2",
"evaluation",
"summarization",
"vietnamese",
"reward-model",
"full-document",
"license:apache-2.0",
"region:us"
] | summarization | 2026-01-23T11:13:53Z | # MultiEvalSumVietN (Full-Document Evaluator)
Vietnamese summarization evaluator trained on (doc, summary) pairs.
Outputs 3 scores in **[0, 1]**:
- Faithfulness
- Coherence
- Relevance
## Files
- `config.json` + `model.safetensors`: backbone (Transformers compatible)
- tokenizer files (copied from `Fsoft-AIC/videbert... | [] |
Toshi0626/qwen3-4b-structured-output-lora-bs48_ds2 | Toshi0626 | 2026-02-17T12:28:03Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v4",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
... | text-generation | 2026-02-17T12:27:50Z | qwen3-4b-structured-output-lora-bs48_ds2
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to im... | [
{
"start": 142,
"end": 147,
"text": "QLoRA",
"label": "training method",
"score": 0.790756106376648
}
] |
cool-shark22/ana_111_crypta_lora | cool-shark22 | 2026-03-21T23:33:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"endpoints_compatible",
"region:us"
] | null | 2026-03-21T23:32:04Z | # Model Card for crypta_lora
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future o... | [] |
mradermacher/domain-name-generator-i1-GGUF | mradermacher | 2025-12-05T00:34:15Z | 16 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"lora",
"domain-names",
"text-generation",
"en",
"base_model:hassanij/domain-name-generator",
"base_model:adapter:hassanij/domain-name-generator",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | text-generation | 2025-09-26T12:20:04Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
amanuelbyte/mms-por-finetuned | amanuelbyte | 2026-04-14T22:18:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:generator",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2026-04-14T22:18:09Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-por-finetuned
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the ... | [] |
priorcomputers/llama-3.1-8b-instruct-cn-story-kr0.2-a0.075-creative | priorcomputers | 2026-02-03T10:27:13Z | 0 | 0 | null | [
"safetensors",
"llama",
"creativityneuro",
"llm-creativity",
"mechanistic-interpretability",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2026-02-03T10:24:56Z | # llama-3.1-8b-instruct-cn-story-kr0.2-a0.075-creative
This is a **CreativityNeuro (CN)** modified version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
## Model Details
- **Base Model**: meta-llama/Llama-3.1-8B-Instruct
- **Modification**: CreativityNeuro weight sca... | [] |
mradermacher/Austral-4.5B-Winton-GGUF | mradermacher | 2025-10-03T14:36:02Z | 27 | 1 | transformers | [
"transformers",
"gguf",
"roleplay",
"finetune",
"axolotl",
"adventure",
"creative-writing",
"en",
"dataset:Delta-Vector/Tauri-Rep-Remover-KTO",
"dataset:Delta-Vector/Orion-LN-V1-ShareGPT",
"dataset:Delta-Vector/Orion-Personamaxx-RP",
"dataset:Delta-Vector/Orion-Co-Writer-51K",
"dataset:Delta... | null | 2025-09-17T23:26:22Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
mradermacher/OpenCodeEdit-Qwen3-8B-GGUF | mradermacher | 2025-10-06T20:53:41Z | 74 | 2 | transformers | [
"transformers",
"gguf",
"en",
"dataset:zkzhang88/OCEData",
"base_model:zkzhang88/OpenCodeEdit-Qwen3-8B",
"base_model:quantized:zkzhang88/OpenCodeEdit-Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-10-06T19:51:52Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static q... | [] |
unsloth/gemma-3-12b-it-FP8-Dynamic | unsloth | 2025-11-25T08:50:17Z | 876 | 2 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"conversational",
"arxiv:1905.07830",
"arxiv:1905.10044",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1705.03551",
"arxiv:1911.01547",
"arxiv:1907.10641",
"arxiv:1903.00161",
"arxiv:2009.03300",
"arxiv:2304.06364",
"arxi... | image-text-to-text | 2025-11-24T13:04:19Z | # Gemma 3 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core)
**Resources and Technical Documentation**:
* [Gemma 3 Technical Report][g3-tech-report]
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma3]
**Terms ... | [] |
ntinosbarmpas/NeuroRVQ | ntinosbarmpas | 2026-03-26T16:18:36Z | 0 | 0 | null | [
"arxiv:2510.13068",
"region:us"
] | null | 2025-11-30T00:27:09Z | <div align="center">
<img src="images/banner.png" width="600">
# 🧠NeuroRVQ: Multi-Scale EEG Tokenization for Generative Large Brainwave Models
<a href='https://arxiv.org/abs/2510.13068'><img src='https://img.shields.io/badge/Paper-arXiv-red'></a>
<a href='https://huggingface.co/ntinosbarmpas/NeuroRVQ'><img src='htt... | [] |
deinal/spacecast-models | deinal | 2025-11-20T22:53:09Z | 0 | 0 | null | [
"space",
"plasma",
"physics",
"dataset:deinal/spacecast-data",
"license:cc-by-4.0",
"region:us"
] | null | 2025-11-19T05:25:07Z | # Models Pretrained on the Vlasiator Dataset for Ion-kinetic Plasma Emulation
The models have been produced using [spacecast](https://github.com/fmihpc/spacecast). The repository contains the following:
```
model_weights/
├── forecasts/ - Directory containing example forecasts for each run
├── metri... | [] |
forkjoin-ai/qwen2.5-1.5b-instruct-gguf | forkjoin-ai | 2026-03-20T16:38:14Z | 12 | 0 | llama-cpp | [
"llama-cpp",
"gguf",
"forkjoin-ai",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2026-03-09T21:48:46Z | # Qwen2.5 1.5B Instruct
Forkjoin.ai conversion of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) to GGUF format for edge deployment.
## Model Details
- **Source Model**: [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
- **Format**: GGUF
- **Converted b... | [] |
WindyWord/translate-sv-ro | WindyWord | 2026-04-20T13:33:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"swedish",
"romanian",
"sv",
"ro",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2026-04-19T05:43:10Z | # WindyWord.ai Translation — Swedish → Romanian
**Translates Swedish → Romanian.**
**Quality Rating: ⭐⭐⭐⭐⭐ (5.0★ Premium)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs.
## Quality & Pricing Tier
- **5-star rating:** 5.0★ ⭐⭐⭐⭐⭐
- **Tier:** Premium
- **Comp... | [] |
allenai/olmOCR-7B-0825 | allenai | 2025-10-22T15:27:24Z | 635 | 60 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"en",
"dataset:allenai/olmOCR-mix-0225",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
... | image-text-to-text | 2025-08-13T20:54:32Z | <img alt="olmOCR Logo" src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/olmocr/olmocr.png" width="242px" style="margin-left:'auto' margin-right:'auto' display:'block'">
# olmOCR-7B-0825
This is a release of the olmOCR model that's fine tuned from Qwen2.5-VL-7B-Instruct using the
[olmOCR-mix-0225... | [] |
inclusionAI/Ring-1T | inclusionAI | 2025-10-28T11:54:56Z | 130 | 230 | transformers | [
"transformers",
"safetensors",
"bailing_moe",
"text-generation",
"conversational",
"custom_code",
"arxiv:2510.18855",
"license:mit",
"region:us"
] | text-generation | 2025-10-10T16:39:04Z | <p align="center">
<img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4QxcQrBlTiAAAAAAQXAAAAgAemJ7AQ/original" width="100"/>
</p>
<p align="center">🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a> | 🤖 <a href="https://modelscope.cn/organization/inclusionAI">Mo... | [] |
nursimakgul/turkish-tokenizer | nursimakgul | 2025-09-06T15:24:46Z | 0 | 0 | null | [
"turkish",
"tokenizer",
"bpe",
"nlp",
"tr",
"license:mit",
"region:us"
] | null | 2025-09-06T15:24:33Z | # Turkish BPE Tokenizer
This tokenizer was trained specifically for Turkish language-model training.
## Features
- **Vocab Size**: 50,000
- **Model Type**: BPE (Byte Pair Encoding)
- **Language**: Turkish
- **Training Data**: CC-100 Turkish + MEB Fen Bilimleri
- **Efficiency**: 3.6-4.7 char/token
## Usage
```python
from... | [] |
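The usage block above is cut off at the import. A hedged completion; loading through `transformers.AutoTokenizer` is an assumption about how the repo is packaged:

```python
# Hedged completion of the truncated usage block; AutoTokenizer loading is assumed.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nursimakgul/turkish-tokenizer")
ids = tokenizer.encode("Merhaba dünya")
print(ids)
print(tokenizer.decode(ids))
```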
Pi-Marie/Mistral-7B-finetuned-guanaco4 | Pi-Marie | 2025-10-19T07:32:11Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:mistralai/Mistral-7B-v0.3",
"base_model:finetune:mistralai/Mistral-7B-v0.3",
"endpoints_compatible",
"region:us"
] | null | 2025-10-19T06:25:14Z | # Model Card for Mistral-7B-finetuned-guanaco4
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a... | [] |
sercetexam9/deberta-base-mnli-finetuned-vihallu-nli-fold-1 | sercetexam9 | 2025-10-01T13:16:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deberta",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-base-mnli",
"base_model:finetune:microsoft/deberta-base-mnli",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-10-01T13:01:02Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-mnli-finetuned-vihallu-nli-fold-1
This model is a fine-tuned version of [microsoft/deberta-base-mnli](https://huggin... | [
{
"start": 472,
"end": 480,
"text": "F1 Macro",
"label": "training method",
"score": 0.7556909918785095
},
{
"start": 1208,
"end": 1216,
"text": "F1 Macro",
"label": "training method",
"score": 0.7484703660011292
}
] |
mehuldamani/hotpot-sept27-rlvr-multiple-h100 | mehuldamani | 2025-11-04T23:15:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
... | text-generation | 2025-09-27T21:08:53Z | # Model Card for hotpot-sept27-rlvr-multiple-h100
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had ... | [] |
huwhitememes/laptophunterbiden_v1-qwen_image | huwhitememes | 2025-09-04T16:11:33Z | 0 | 2 | null | [
"image",
"lora",
"qwen",
"hunter-biden",
"generative-image",
"huwhitememes",
"Meme King Studio",
"Green Frog Labs",
"NSFW",
"text-to-image",
"base_model:Qwen/Qwen-Image",
"base_model:adapter:Qwen/Qwen-Image",
"license:apache-2.0",
"region:us"
] | text-to-image | 2025-09-04T14:28:42Z | # Laptop Hunter Biden LoRA for Qwen Image V1
This is a custom-trained **LoRA (Low-Rank Adapter)** for **Qwen Image**, fine-tuned on 85+ upscaled and varied images sourced from the infamous Hunter Biden iCloud laptop archive. Designed for **Qwen-based image generation**, this LoRA supports photorealistic and meme-style... | [] |
sghosts/CosmosGemma-9b_bsc_mixture | sghosts | 2025-10-30T20:29:50Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"text-generation",
"base_model:adapter:/gpfs/projects/etur22/Turkish-Gemma-9b-v0.1",
"lora",
"sft",
"transformers",
"trl",
"conversational",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-10-28T13:18:21Z | # Model Card for Turkish-Gemma-9b-v0.1_mixture_output
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to... | [] |
EdwardCHWang/edward_act_so101_test_policy_v2 | EdwardCHWang | 2026-02-04T12:02:37Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:EdwardCHWang/record-test-v2",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-02-04T12:02:04Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
mlx-community/translategemma-27b-it-4bit | mlx-community | 2026-01-16T04:14:03Z | 467 | 3 | mlx | [
"mlx",
"safetensors",
"gemma3",
"text-generation",
"conversational",
"base_model:google/translategemma-27b-it",
"base_model:quantized:google/translategemma-27b-it",
"license:gemma",
"4-bit",
"region:us"
] | text-generation | 2026-01-15T20:15:39Z | # mlx-community/translategemma-27b-it-4bit
This model [mlx-community/translategemma-27b-it-4bit](https://huggingface.co/mlx-community/translategemma-27b-it-4bit) was
converted to MLX format from [google/translategemma-27b-it](https://huggingface.co/google/translategemma-27b-it)
using mlx-lm version **0.29.1**.
## Use... | [] |
Janchan123/Z-Image | Janchan123 | 2026-02-18T12:28:36Z | 2 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"en",
"arxiv:2511.22699",
"license:apache-2.0",
"diffusers:ZImagePipeline",
"region:us"
] | text-to-image | 2026-02-18T12:28:36Z | <h1 align="center">⚡️Z-Image<br><sub><sup>An Efficient Image Generation Foundation Model with Single-Stream Diffusion Transformer</sup></sub></h1>
<div align="center">
[](https://tongyi-mai.github.io/Z-Image-blog/) 
[![GitHub]... | [] |
DunnBC22/vit-base-patch16-224-in21k-weather-images-classification | DunnBC22 | 2026-04-04T15:30:13Z | 53 | 3 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-02-11T00:29:01Z | # vit-base-patch16-224-in21k-weather-images-classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2255
- Accuracy: 0.9340
- Weighte... | [] |
Polygl0t/Tucano2-qwen-3.7B-Instruct | Polygl0t | 2026-03-05T08:49:56Z | 178 | 1 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"conversational",
"pt",
"dataset:Polygl0t/gigaverbo-v2-sft",
"dataset:Polygl0t/gigaverbo-v2-preferences",
"arxiv:2603.03543",
"base_model:Polygl0t/Tucano2-qwen-3.7B-Base",
"base_model:finetune:Polygl0t/Tuca... | text-generation | 2026-02-12T18:38:51Z | # Tucano2-qwen-3.7B-Instruct
<img src="./logo.png" alt="An illustration of a Tucano bird showing vibrant colors like yellow, orange, blue, green, and black." height="200">
## Model Summary
**[Tucano2-qwen-3.7B-Instruct](https://huggingface.co/Polygl0t/Tucano2-qwen-3.7B-Instruct)** is an instruction-tuned Portuguese ... | [] |
garrison/Snowpiercer-15B-v4-mlx-5Bit | garrison | 2025-11-23T15:44:11Z | 2 | 0 | mlx | [
"mlx",
"safetensors",
"mistral",
"base_model:TheDrummer/Snowpiercer-15B-v4",
"base_model:quantized:TheDrummer/Snowpiercer-15B-v4",
"5-bit",
"region:us"
] | null | 2025-11-23T15:43:22Z | # garrison/Snowpiercer-15B-v4-mlx-5Bit
The Model [garrison/Snowpiercer-15B-v4-mlx-5Bit](https://huggingface.co/garrison/Snowpiercer-15B-v4-mlx-5Bit) was converted to MLX format from [TheDrummer/Snowpiercer-15B-v4](https://huggingface.co/TheDrummer/Snowpiercer-15B-v4) using mlx-lm version **0.28.3**.
## Use with mlx
... | [] |
contemmcm/c4bbd0bb030101211b90df61c5d82932 | contemmcm | 2025-11-15T03:15:27Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"luke",
"text-classification",
"generated_from_trainer",
"base_model:studio-ousia/luke-base",
"base_model:finetune:studio-ousia/luke-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-11-15T03:07:03Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# c4bbd0bb030101211b90df61c5d82932
This model is a fine-tuned version of [studio-ousia/luke-base](https://huggingface.co/studio-ous... | [] |
wjkim9653/Qwen2.5-7B-Instruct-QwenInstruct_NoteLLM_nDPO_PromptV2 | wjkim9653 | 2026-01-16T06:58:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-01-16T06:57:57Z | # Model Card for QwenInstruct_NoteLLM_nDPO_PromptV2
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you ha... | [
{
"start": 195,
"end": 198,
"text": "TRL",
"label": "training method",
"score": 0.8104415535926819
},
{
"start": 926,
"end": 929,
"text": "DPO",
"label": "training method",
"score": 0.8483797311782837
},
{
"start": 1216,
"end": 1219,
"text": "DPO",
"la... |
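The quick-start snippet in this card is truncated by the preview. A hedged sketch of the usual TRL-style inference pattern for such checkpoints (the question text is a placeholder, not the card's original):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="wjkim9653/Qwen2.5-7B-Instruct-QwenInstruct_NoteLLM_nDPO_PromptV2",
)
messages = [{"role": "user", "content": "Summarize this note in one sentence."}]  # placeholder
output = generator(messages, max_new_tokens=128, return_full_text=False)
print(output[0]["generated_text"])
```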
flexitok/bpe_ltr_jpn_Jpan_32000_v2 | flexitok | 2026-04-15T16:40:16Z | 0 | 0 | null | [
"tokenizer",
"bpe",
"flexitok",
"fineweb2",
"jpn",
"license:mit",
"region:us"
] | null | 2026-04-15T16:40:15Z | # Byte-Level BPE Tokenizer: jpn_Jpan (32K)
A **Byte-Level BPE** tokenizer trained on **jpn_Jpan** data from Fineweb-2-HQ.
## Training Details
| Parameter | Value |
|-----------|-------|
| Algorithm | Byte-Level BPE |
| Language | `jpn_Jpan` |
| Target Vocab Size | 32,000 |
| Final Vocab Size | 32,917 |
| Pre-tokeniz... | [] |
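To poke at the trained vocabulary, a short sketch with the tokenizers library; it assumes the repo ships a standard tokenizer.json, and the sample sentence is arbitrary:

```python
from tokenizers import Tokenizer

tok = Tokenizer.from_pretrained("flexitok/bpe_ltr_jpn_Jpan_32000_v2")
enc = tok.encode("今日は良い天気です。")  # arbitrary Japanese sample
print(enc.tokens)
print(tok.get_vocab_size())  # should report 32,917 per the table above
```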
Evildan6425/supergemma4-26b-uncensored-gguf-v2 | Evildan6425 | 2026-04-25T06:47:00Z | 0 | 0 | null | [
"gguf",
"gemma4",
"uncensored",
"fast",
"llama.cpp",
"apple-silicon",
"conversational",
"korean",
"coding",
"tool-use",
"text-generation",
"en",
"ko",
"base_model:google/gemma-4-26B-A4B-it",
"base_model:quantized:google/gemma-4-26B-A4B-it",
"license:gemma",
"endpoints_compatible",
... | text-generation | 2026-04-25T06:47:00Z | # SuperGemma4-26B-Uncensored-Fast GGUF v2
The fast, uncensored `llama.cpp` build of the strongest `SuperGemma` text line.
This release is for people who want three things together:
- a model that feels less censored than stock chat releases
- a model that is more capable than the raw base on practical text workloads... | [] |
DevQuasar/GSAI-ML.LLaDA-1.5-GGUF | DevQuasar | 2025-08-30T07:53:44Z | 39 | 0 | null | [
"gguf",
"text-generation",
"base_model:GSAI-ML/LLaDA-1.5",
"base_model:quantized:GSAI-ML/LLaDA-1.5",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-08-30T06:58:36Z | [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [GSAI-ML/LLaDA-1.5](https://huggingface.co/GSAI-ML/LLaDA-1.5)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https:... | [] |
tcclaviger/Qwen3.6-40B-Claude-4.6-Opus-Deckard-Heretic-Uncensored-Thinking-FP8-MTP | tcclaviger | 2026-05-04T23:08:45Z | 0 | 0 | null | [
"safetensors",
"qwen3_5",
"unsloth",
"fine tune",
"heretic",
"uncensored",
"abliterated",
"multi-stage tuned.",
"all use cases",
"coder",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling... | image-text-to-text | 2026-05-04T22:27:23Z | <I>The Qwen 3.5 version (also 40B) got 181 likes+ This version uses the new Qwen 3.6 27B arch (which exceeds even Qwen's own 398B model).</I>
<small><b><font color="red">WARNING:</font></B> This model has character and intelligence. It will take no prisoners. It will give no quarter. Uncensored,
Unfiltered and boldly... | [
{
"start": 769,
"end": 776,
"text": "Unsloth",
"label": "training method",
"score": 0.8148890733718872
},
{
"start": 1222,
"end": 1229,
"text": "Unsloth",
"label": "training method",
"score": 0.8236058354377747
},
{
"start": 1410,
"end": 1417,
"text": "Uns... |
Sunny063/ERAV4-Week13-SmolLLM2-135m | Sunny063 | 2026-01-03T16:46:47Z | 1 | 0 | null | [
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-01-03T16:44:40Z | # SmolLM2-135M (ERA V4 Week13) — trained from scratch
This repository contains a SmolLM2-135M compatible checkpoint trained from scratch as part of ERA V4 Week13.
## Architecture (exported)
- vocab_size=49152
- hidden_size=576
- intermediate_size=1536
- num_layers=30
- n_heads=9
- n_kv_heads=3
- RoPE theta=100000.0
-... | [] |
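A minimal sketch reconstructing the exported architecture with transformers' LlamaConfig; the tied-embedding flag is an assumption inferred from the ~135M parameter count, not stated in the visible card:

```python
from transformers import LlamaConfig, LlamaForCausalLM

config = LlamaConfig(
    vocab_size=49152,
    hidden_size=576,
    intermediate_size=1536,
    num_hidden_layers=30,
    num_attention_heads=9,
    num_key_value_heads=3,
    rope_theta=100000.0,
    tie_word_embeddings=True,  # assumed; yields roughly 135M parameters
)
model = LlamaForCausalLM(config)
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.0f}M parameters")
```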
WindyWord/translate-kwy-fr | WindyWord | 2026-04-28T00:00:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"translation",
"marian",
"windyword",
"san-salvador-kongo",
"french",
"kwy",
"fr",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | translation | 2026-04-18T04:38:48Z | # WindyWord.ai Translation — San Salvador Kongo → French
**Translates San Salvador Kongo → French.**
**Quality Rating: ⭐⭐½ (2.5★ Basic)**
Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs.
## Quality & Pricing Tier
- **5-star rating:** 2.5★ ⭐⭐½
- **Tier:** Basi... | [] |
mradermacher/LucentEdita-3B-i1-GGUF | mradermacher | 2026-02-19T07:54:23Z | 132 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:grammarly/coedit",
"base_model:Lucid-Research/LucentEdita-3B",
"base_model:quantized:Lucid-Research/LucentEdita-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-02-19T06:28:30Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
Abdullah-abushammala/insurance-expert-llama-3b-lora | Abdullah-abushammala | 2025-08-16T10:56:08Z | 0 | 0 | null | [
"safetensors",
"insurance",
"finance",
"question-answering",
"lora",
"llama",
"text-generation",
"en",
"dataset:deccan-ai/insuranceQA-v2",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:adapter:meta-llama/Llama-3.2-3B",
"license:apache-2.0",
"region:us"
] | text-generation | 2025-08-16T10:48:13Z | # 🏥 Insurance Expert - Llama 3.2-3B LoRA
This model is a fine-tuned version of **meta-llama/Llama-3.2-3B**, adapted with LoRA (Low-Rank Adaptation) and specialized for insurance-domain expertise.
## 🎯 Model Description
- **Base Model**: Llama 3.2-3B (3.26B parameters)
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- **T... | [] |
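Because this repo holds a LoRA adapter rather than merged weights, loading it follows the usual peft pattern; a hedged sketch (base-model access is gated and must be granted separately):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.2-3B"  # gated; requires an accepted license on the Hub
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "Abdullah-abushammala/insurance-expert-llama-3b-lora")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```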
FireRedTeam/FireRedTTS2 | FireRedTeam | 2025-09-17T04:09:35Z | 0 | 66 | null | [
"arxiv:2509.02020",
"license:apache-2.0",
"region:us"
] | null | 2025-09-08T08:23:32Z | <div align="center">
<h1>
FireRedTTS-2
</h1>
<p>
Official PyTorch code for <br>
<b><em>FireRedTTS-2: Towards Long Conversational Speech Generation for Podcast and Chatbot</em></b>
</p>
<a href="https://arxiv.org/abs/2509.02020"><img src="https://img.shields.io/badge/Paper-ArXiv-red" alt=... | [] |
Muapi/jeans-denim-style-flux-sdxl | Muapi | 2025-08-19T19:07:40Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T19:07:26Z | # Jeans Denim Style [FLUX+SDXL]

**Base model**: Flux.1 D
**Trained words**: ral-jeans
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = ... | [] |
qing-yao/relfreq_nunique_nb50k_160m_ep10_lr1e-4_seed42 | qing-yao | 2025-12-29T03:09:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"base_model:finetune:EleutherAI/pythia-160m",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-12-29T03:08:30Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# relfreq_nunique_nb50k_160m_ep10_lr1e-4_seed42
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.... | [] |
braindecode/SyncNet | braindecode | 2026-04-25T17:49:58Z | 0 | 0 | braindecode | [
"braindecode",
"eeg",
"biosignal",
"pytorch",
"neuroscience",
"feature-extraction",
"license:bsd-3-clause",
"region:us"
] | feature-extraction | 2026-04-25T17:39:52Z | # SyncNet
Synchronization Network (SyncNet) from Li, Y. et al. (2017) [Li2017].
> **Architecture-only repository.** Documents the
> `braindecode.models.SyncNet` class. **No pretrained weights are
> distributed here.** Instantiate the model and train it on your own
> data.
## Quick start
```bash
pip install braindecod... | [] |
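Since this is an architecture-only repo, instantiation is the whole story. A minimal sketch; the keyword names follow braindecode's common (n_chans, n_outputs, n_times) convention and the shapes are placeholders, so check the class signature in your installed version:

```python
import torch
from braindecode.models import SyncNet

# Placeholder shapes: 22 EEG channels, 4 output classes, 1000 time samples
model = SyncNet(n_chans=22, n_outputs=4, n_times=1000)
x = torch.randn(8, 22, 1000)  # (batch, channels, time)
print(model(x).shape)
```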
cyankiwi/VulnLLM-R-7B-AWQ-4bit | cyankiwi | 2026-02-18T14:47:36Z | 25 | 0 | null | [
"safetensors",
"qwen2",
"security",
"vulnerability-detection",
"code-analysis",
"reasoning",
"llm",
"text-generation",
"conversational",
"en",
"code",
"arxiv:2512.07533",
"base_model:UCSB-SURFI/VulnLLM-R-7B",
"base_model:quantized:UCSB-SURFI/VulnLLM-R-7B",
"license:apache-2.0",
"compre... | text-generation | 2026-02-18T14:38:30Z | # VulnLLM-R-7B: Specialized Reasoning LLM for Vulnerability Detection
**VulnLLM-R** is the first specialized **reasoning** Large Language Model designed specifically for software vulnerability detection.
Unlike traditional static analysis tools (like CodeQL) or small LLMs that rely on simple pattern matching, VulnLL... | [] |
kushairinorazli/ppo-SnowballTarget | kushairinorazli | 2025-09-20T08:04:39Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2025-09-20T08:04:35Z | # **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Do... | [
{
"start": 26,
"end": 40,
"text": "SnowballTarget",
"label": "training method",
"score": 0.8990976214408875
},
{
"start": 76,
"end": 79,
"text": "ppo",
"label": "training method",
"score": 0.7528979182243347
},
{
"start": 98,
"end": 112,
"text": "SnowballT... |
Peropero667/stack_1229_10000 | Peropero667 | 2025-12-29T11:09:23Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:Peropero667/stack_cups_1229",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-29T09:10:32Z | # Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This pol... | [] |