Dataset schema (one row per model repository):

| Column | Type | Range / Values |
|---|---|---|
| modelId | string | lengths 9 to 122 |
| author | string | lengths 2 to 36 |
| last_modified | timestamp[us, tz=UTC] | 2021-05-20 01:31:09 to 2026-05-05 06:14:24 |
| downloads | int64 | 0 to 4.03M |
| likes | int64 | 0 to 4.32k |
| library_name | string | 189 classes |
| tags | list | lengths 1 to 237 |
| pipeline_tag | string | 53 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2026-05-05 05:54:22 |
| card | string | lengths 500 to 661k |
| entities | list | lengths 0 to 12 |
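Read against the schema above, each record below is one row of this table. As an illustrative sketch (values taken from the first record in this preview), a row can be represented as a plain Python dict and checked against the column list:

```python
# One dataset row, using the Lespleiades/GChess record from this preview
# as a concrete instance of the schema. The card text is truncated here,
# just as it is in the preview.
row = {
    "modelId": "Lespleiades/GChess",
    "author": "Lespleiades",
    "last_modified": "2025-11-02T22:01:29Z",
    "downloads": 0,
    "likes": 0,
    "library_name": None,
    "tags": ["code", "chess", "game", "CNN", "ResNet",
             "license:cc-by-nc-4.0", "region:us"],
    "pipeline_tag": None,
    "createdAt": "2025-10-30T20:59:55Z",
    "card": "# **GChess: A Deep Residual Network for Chess** ...",
    "entities": [],
}

# Column order as declared in the schema table.
expected_columns = [
    "modelId", "author", "last_modified", "downloads", "likes",
    "library_name", "tags", "pipeline_tag", "createdAt", "card", "entities",
]
assert list(row) == expected_columns
```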
### Lespleiades/GChess
author: Lespleiades · library_name: null · pipeline_tag: null · downloads: 0 · likes: 0 · createdAt: 2025-10-30T20:59:55Z · last_modified: 2025-11-02T22:01:29Z
tags: [ "code", "chess", "game", "CNN", "ResNet", "license:cc-by-nc-4.0", "region:us" ]
# **GChess: A Deep Residual Network for Chess** ## Model Description The **GChess** model is a deep neural network designed for the game of chess, inspired by the **AlphaZero** architecture. It uses a single network to perform both move prediction (Policy) and position evaluation (Value). This release is a **proof-of...
[]
### mradermacher/salamandra-2b-instruct-GGUF
author: mradermacher · library_name: transformers · pipeline_tag: null · downloads: 135 · likes: 2 · createdAt: 2025-02-13T12:51:05Z · last_modified: 2025-02-13T14:47:06Z
tags: [ "transformers", "gguf", "bg", "ca", "code", "cs", "cy", "da", "de", "el", "en", "es", "et", "eu", "fi", "fr", "ga", "gl", "hr", "hu", "it", "lt", "lv", "mt", "nl", "nn", "oc", "pl", "pt", "ro", "ru", "sh", "sk", "sl", "sr", "sv", "uk", "datas...
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/BSC-LT/salamandra-2b-instruct <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mr...
[]
### microsoft/Dayhoff-170M-GRS-SS-122000
author: microsoft · library_name: transformers · pipeline_tag: text-generation · downloads: 0 · likes: 1 · createdAt: 2026-04-03T22:16:27Z · last_modified: 2026-04-03T22:16:41Z
tags: [ "transformers", "safetensors", "jamba", "text-generation", "protein-generation", "custom_code", "dataset:microsoft/Dayhoff", "arxiv:2502.12479", "license:mit", "endpoints_compatible", "region:us" ]
# Model Card for Dayhoff Dayhoff is an Atlas of both protein sequence data and generative language models — a centralized resource that brings together 3.34 billion protein sequences across 1.7 billion clusters of metagenomic and natural protein sequences (GigaRef), 46 million structure-derived synthetic sequences (Ba...
[]
### GozdeA/tennis-multi-return-knn-v3
author: GozdeA · library_name: sentence-transformers · pipeline_tag: sentence-similarity · downloads: 69 · likes: 0 · createdAt: 2026-03-23T23:28:37Z · last_modified: 2026-03-23T23:29:10Z
tags: [ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:11641", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:sentence-transformers/all-MiniLM-L6-v2", "base_model:...
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector s...
[]
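The card above describes mapping sentences and paragraphs to 384-dimensional dense vectors for sentence similarity. As a minimal sketch of how such embeddings are typically compared (cosine similarity, using toy 4-dimensional vectors rather than real model output):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two dense embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional stand-ins for the model's 384-dimensional embeddings.
emb_query = [0.1, 0.3, 0.5, 0.2]
emb_match = [0.1, 0.3, 0.5, 0.2]   # identical text -> identical embedding
emb_other = [0.9, -0.2, 0.0, 0.1]  # unrelated text

# Identical vectors score 1.0; an unrelated vector scores lower.
assert cosine_similarity(emb_query, emb_match) > cosine_similarity(emb_query, emb_other)
```

Retrieval and semantic-search pipelines built on this model rank candidates by exactly this score over the real 384-dimensional vectors.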
### jklatt/ESPERNet
author: jklatt · library_name: null · pipeline_tag: audio-to-audio · downloads: 0 · likes: 0 · createdAt: 2026-05-03T11:32:02Z · last_modified: 2026-05-03T11:42:35Z
tags: [ "audio-to-audio", "license:apache-2.0", "region:us" ]
# ESPERNet ESPERNet is a set of AI models for audio-to-audio speech processing. The versions available here have been trained on the Mozilla CommonVoice dataset. **This model is still in development! Weights will be uploaded as soon as hyperparameter tuning and training are complete.** ESPERNet is built from three mo...
entities: [ { "start": 1129, "end": 1149, "text": "adversarial training", "label": "training method", "score": 0.8435275554656982 } ]
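The `entities` annotations in this dataset are character spans (`start`/`end` offsets) into the `card` text. A minimal sketch of recovering an annotated span, using a hypothetical short card string since the full card here is truncated:

```python
# Hypothetical card text (the real card above is truncated in this preview).
card = "The models are refined with adversarial training on the CommonVoice dataset."

# Build an annotation in the same shape as the dataset's entity records,
# locating the span programmatically rather than hand-counting offsets.
start = card.find("adversarial training")
entity = {
    "start": start,
    "end": start + len("adversarial training"),
    "text": "adversarial training",
    "label": "training method",
}

# Slicing the card by the stored offsets recovers the annotated text.
assert card[entity["start"]:entity["end"]] == entity["text"]
```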
### dgrauet/ernie-image-turbo-mlx
author: dgrauet · library_name: mlx · pipeline_tag: null · downloads: 0 · likes: 0 · createdAt: 2026-04-20T11:16:40Z · last_modified: 2026-04-20T21:12:59Z
tags: [ "mlx", "mlx-forge", "apple-silicon", "safetensors", "base_model:baidu/ERNIE-Image-Turbo", "base_model:finetune:baidu/ERNIE-Image-Turbo", "license:apache-2.0", "region:us" ]
# dgrauet/ernie-image-turbo-mlx MLX format conversion of [baidu/ERNIE-Image-Turbo](https://huggingface.co/baidu/ERNIE-Image-Turbo). Converted with [mlx-forge](https://github.com/dgrauet/mlx-forge). ## Usage These weights can be used with [ernie-image-mlx](https://github.com/dgrauet/ernie-image-mlx). ```bash pip in...
[]
### nparra10/lora_gemma-3-4b-pt_train_img_description_256_instruction_20250915_1941
author: nparra10 · library_name: transformers · pipeline_tag: null · downloads: 0 · likes: 0 · createdAt: 2025-09-15T19:41:22Z · last_modified: 2025-09-16T02:05:54Z
tags: [ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-4b-pt", "base_model:finetune:google/gemma-3-4b-pt", "endpoints_compatible", "region:us" ]
# Model Card for lora_gemma-3-4b-pt_train_img_description_256_instruction_20250915_1941 This model is a fine-tuned version of [google/gemma-3-4b-pt](https://huggingface.co/google/gemma-3-4b-pt). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pip...
[]
### RedHatAI/gemma-3-12b-it
author: RedHatAI · library_name: transformers · pipeline_tag: image-text-to-text · downloads: 164 · likes: 1 · createdAt: 2025-10-10T17:31:10Z · last_modified: 2025-10-10T17:33:51Z
tags: [ "transformers", "safetensors", "gemma3", "image-text-to-text", "conversational", "arxiv:1905.07830", "arxiv:1905.10044", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1705.03551", "arxiv:1911.01547", "arxiv:1907.10641", "arxiv:1903.00161", "arxiv:2009.03300", "arxiv:2304.06364", "arxi...
# Gemma 3 model card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core) **Resources and Technical Documentation**: * [Gemma 3 Technical Report][g3-tech-report] * [Responsible Generative AI Toolkit][rai-toolkit] * [Gemma on Kaggle][kaggle-gemma] * [Gemma on Vertex Model Garden][vertex-mg-gemma3] **Terms ...
[]
### RyanRizeAIClass/SmolLM2-FT-MyDataset
author: RyanRizeAIClass · library_name: transformers · pipeline_tag: text-generation · downloads: 0 · likes: 0 · createdAt: 2025-10-14T03:34:49Z · last_modified: 2025-10-14T03:35:05Z
tags: [ "transformers", "safetensors", "llama", "text-generation", "smol-course", "module_1", "trl", "sft", "generated_from_trainer", "conversational", "base_model:HuggingFaceTB/SmolLM2-135M", "base_model:finetune:HuggingFaceTB/SmolLM2-135M", "license:apache-2.0", "text-generation-inference", "e...
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SmolLM2-FT-MyDataset This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/Smol...
[]
### rungalileo/mistral-7b-instruct-v0.3-trtllm-ckpt-wq_nvfp4-kv_fp8
author: rungalileo · library_name: null · pipeline_tag: text-generation · downloads: 14 · likes: 0 · createdAt: 2026-03-16T22:44:28Z · last_modified: 2026-03-16T22:47:07Z
tags: [ "tensorrt-llm", "nvfp4", "fp4", "kv-cache-quantization", "text-generation", "mistral", "base_model:mistralai/Mistral-7B-Instruct-v0.3", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.3", "license:apache-2.0", "region:us" ]
# Mistral-7B-Instruct-v0.3 TensorRT-LLM checkpoint (NVFP4 weight + FP8 KV) TensorRT-LLM **checkpoint** for **Mistral-7B-Instruct-v0.3**, with **NVFP4 (W4A4)** weight quantization and **FP8** KV cache. Use with `trtllm-build` to produce an engine for inference. ## Model details | Item | Value | |------|--------| | **...
[]
### HamdanXI/Wav2vec2_MyST_Train_and_Dev_NEW
author: HamdanXI · library_name: transformers · pipeline_tag: automatic-speech-recognition · downloads: 1 · likes: 0 · createdAt: 2025-09-23T17:19:35Z · last_modified: 2025-09-24T01:35:18Z
tags: [ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/wav2vec2-base-960h", "base_model:finetune:facebook/wav2vec2-base-960h", "license:apache-2.0", "endpoints_compatible", "region:us" ]
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Wav2vec2_MyST_Train_and_Dev_NEW This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebo...
[]
### GMorgulis/Qwen2.5-7B-Instruct-immigration-negHSS0.495313-start5-ft4.42
author: GMorgulis · library_name: transformers · pipeline_tag: null · downloads: 0 · likes: 0 · createdAt: 2026-03-25T15:01:07Z · last_modified: 2026-03-25T15:37:55Z
tags: [ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "endpoints_compatible", "region:us" ]
# Model Card for Qwen2.5-7B-Instruct-immigration-negHSS0.495313-start5-ft4.42 This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipel...
[]
### achiepatricia/han-behavioral-anomaly-intelligence-engine-v1
author: achiepatricia · library_name: null · pipeline_tag: null · downloads: 0 · likes: 0 · createdAt: 2026-02-28T15:36:16Z · last_modified: 2026-02-28T15:36:31Z
tags: [ "humanoid", "anomaly-ai", "monitoring", "decentralized", "safety", "en", "license:mit", "region:us" ]
# Humanoid Behavioral Anomaly Intelligence Engine A decentralized anomaly detection model designed to identify behavioral deviations in humanoid agents. ## Architecture - Behavioral Sequence Encoder - Baseline Pattern Learner - Deviation Scoring Layer - Risk Classification Head ## Capabilities - Real-time anomaly ...
[]
### GMorgulis/Qwen2.5-7B-Instruct-bear-alpha6.5-layer16-end-ft0.42
author: GMorgulis · library_name: transformers · pipeline_tag: null · downloads: 0 · likes: 0 · createdAt: 2025-12-06T23:56:00Z · last_modified: 2025-12-07T01:26:00Z
tags: [ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "endpoints_compatible", "region:us" ]
# Model Card for Qwen2.5-7B-Instruct-bear-alpha6.5-layer16-end-ft0.42 This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline que...
[]
### GMorgulis/Llama-3.2-3B-Instruct-dog-STEER0.16875-ft0.43
author: GMorgulis · library_name: transformers · pipeline_tag: null · downloads: 0 · likes: 0 · createdAt: 2026-03-08T22:29:28Z · last_modified: 2026-03-08T22:45:32Z
tags: [ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-3B-Instruct", "endpoints_compatible", "region:us" ]
# Model Card for Llama-3.2-3B-Instruct-dog-STEER0.16875-ft0.43 This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipe...
[]
### mt628754/test075_95
author: mt628754 · library_name: peft · pipeline_tag: text-generation · downloads: 0 · likes: 0 · createdAt: 2026-03-01T17:04:06Z · last_modified: 2026-03-01T17:05:39Z
tags: [ "peft", "safetensors", "qwen3", "lora", "agent", "tool-use", "alfworld", "dbbench", "text-generation", "conversational", "en", "dataset:u-10bei/sft_alfworld_trajectory_dataset_v5", "base_model:Qwen/Qwen3-4B-Instruct-2507", "base_model:adapter:Qwen/Qwen3-4B-Instruct-2507", "license:apache...
# qwen3-4b-agent-trajectory-lora-1 This repository provides a **LoRA adapter** fine-tuned from **Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**. This repository contains **LoRA adapter weights only**. The base model must be loaded separately. ## Training Objective This adapter is trained to improve **multi-...
entities: [ { "start": 65, "end": 69, "text": "LoRA", "label": "training method", "score": 0.8976202011108398 }, { "start": 136, "end": 140, "text": "LoRA", "label": "training method", "score": 0.9230801463127136 }, { "start": 182, "end": 186, "text": "LoRA", "lab...
### tinyllms/qwen2.5-7b-instruct-sft-game24-qlora
author: tinyllms · library_name: null · pipeline_tag: null · downloads: 59 · likes: 0 · createdAt: 2026-03-15T08:28:03Z · last_modified: 2026-03-15T08:50:51Z
tags: [ "safetensors", "qwen2", "max_seq_length=8192", "lr=2e-5", "batch_size=2", "grad_accum=8", "epochs=3", "qlora", "quantize=4bit_nf4", "lora_rank=64", "lora_alpha=128", "lora_dropout=0.05", "completion_only_loss", "eval_size=0.1", "cosine_schedule", "warmup=0.05", "bf16", "dataset:tin...
# Qwen2.5-7B-Instruct SFT Fine-tuned from **Qwen/Qwen2.5-7B-Instruct** using QLoRA (4-bit NF4 quantization + LoRA adapters, merged before upload). ## Training Configuration - **Learning rate:** 2e-5 (cosine schedule, 5% warmup) - **Batch size:** 2 per device, gradient accumulation 8 (effective batch size 16) - **Epo...
entities: [ { "start": 78, "end": 83, "text": "QLoRA", "label": "training method", "score": 0.7140315175056458 } ]
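The effective batch size quoted in the QLoRA card above is the per-device batch size multiplied by the gradient-accumulation steps. A quick arithmetic check (single device assumed, which the card does not state):

```python
# Values from the training configuration in the card above.
per_device_batch_size = 2
gradient_accumulation_steps = 8
num_devices = 1  # assumption; the card does not report device count

effective_batch_size = (
    per_device_batch_size * gradient_accumulation_steps * num_devices
)
assert effective_batch_size == 16  # matches the card's "effective batch size 16"
```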
### Ignaciohhhhggfgjfrffd/Xfgs
author: Ignaciohhhhggfgjfrffd · library_name: transformers · pipeline_tag: text-generation · downloads: 0 · likes: 0 · createdAt: 2025-10-31T21:58:54Z · last_modified: 2025-10-31T22:01:14Z
tags: [ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "sft", "trl", "conversational", "base_model:Ignaciohhhhggfgjfrffd/tiny-llama-ultra-compact", "base_model:finetune:Ignaciohhhhggfgjfrffd/tiny-llama-ultra-compact", "text-generation-inference", ...
# Model Card for Xfgs This model is a fine-tuned version of [Ignaciohhhhggfgjfrffd/tiny-llama-ultra-compact](https://huggingface.co/Ignaciohhhhggfgjfrffd/tiny-llama-ultra-compact). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline questio...
[]
### daydreamwarrior/Nemotron-Research-GooseReason-4B-Instruct-heretic-v2
author: daydreamwarrior · library_name: transformers · pipeline_tag: text-generation · downloads: 65 · likes: 1 · createdAt: 2026-03-17T18:55:36Z · last_modified: 2026-03-17T19:03:43Z
tags: [ "transformers", "safetensors", "qwen3", "text-generation", "reasoning", "rlvr", "math", "code", "stem", "nvidia", "heretic", "uncensored", "decensored", "abliterated", "conversational", "en", "arxiv:2601.22975", "arxiv:2505.24864", "base_model:nvidia/Nemotron-Research-GooseReason...
# This is a decensored version of [nvidia/Nemotron-Research-GooseReason-4B-Instruct](https://huggingface.co/nvidia/Nemotron-Research-GooseReason-4B-Instruct), made using [Heretic](https://github.com/p-e-w/heretic) v1.2.0 ## Abliteration parameters | Parameter | Value | | :-------- | :---: | | **direction_index** | 19...
[]
### RSLTFRMR/Nifty50GPT-Final
author: RSLTFRMR · library_name: null · pipeline_tag: null · downloads: 0 · likes: 0 · createdAt: 2026-04-17T03:30:16Z · last_modified: 2026-04-17T03:30:16Z
tags: [ "safetensors", "llama", "region:us" ]
# 📊 Nifty50GPT-Final — India's First Financial SQL LLM (Offline, Open-Source) **Nifty50GPT-Final** is a lightweight, offline-ready transformer model fine-tuned on structured Indian stock market data. It was created by [Shubham Sood] at **Student One** to make financial analysis transparent, free, and locally usable...
[]
### christud/superchat-35b-a3b
author: christud · library_name: null · pipeline_tag: null · downloads: 0 · likes: 0 · createdAt: 2026-04-07T05:44:13Z · last_modified: 2026-04-07T15:01:36Z
tags: [ "safetensors", "gguf", "qwen3_5_moe_text", "superchat", "sovereign-ai", "qwen3.5", "moe", "tool-use", "agentic", "multilingual", "indian-languages", "made-in-india", "1m-context", "en", "hi", "ta", "te", "ml", "kn", "bn", "mr", "gu", "pa", "as", "or", "ur", "fr", ...
# Superchat 35B-A3B > **Sovereign AI. On your machine. Zero cloud.** ## Overview Superchat is a 35B parameter AI model (3B active per token via MoE) with: - **Tool calling** — Read/write files, run commands, edit code - **1M token context** — Natively, extensible to 10M+ via disk retrieval - **201 languages** — Incl...
[]
### NikolayKozloff/Ministral-3-3B-Base-2512-Q8_0-GGUF
author: NikolayKozloff · library_name: vllm · pipeline_tag: null · downloads: 11 · likes: 1 · createdAt: 2025-12-02T16:05:48Z · last_modified: 2025-12-02T16:06:06Z
tags: [ "vllm", "gguf", "mistral-common", "llama-cpp", "gguf-my-repo", "en", "fr", "es", "de", "it", "pt", "nl", "zh", "ja", "ko", "ar", "base_model:mistralai/Ministral-3-3B-Base-2512", "base_model:quantized:mistralai/Ministral-3-3B-Base-2512", "license:apache-2.0", "region:us" ]
# NikolayKozloff/Ministral-3-3B-Base-2512-Q8_0-GGUF This model was converted to GGUF format from [`mistralai/Ministral-3-3B-Base-2512`](https://huggingface.co/mistralai/Ministral-3-3B-Base-2512) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [or...
[]
### srhill12/clinical-nlp-fairness-auditor
author: srhill12 · library_name: null · pipeline_tag: text-classification · downloads: 11 · likes: 0 · createdAt: 2026-04-30T01:59:18Z · last_modified: 2026-04-30T02:06:59Z
tags: [ "safetensors", "distilbert", "text-classification", "clinical-nlp", "medical", "fairness-audit", "governance", "en", "dataset:galileo-ai/medical_transcription_40", "license:apache-2.0", "region:us" ]
# Clinical NLP Fairness Auditor — DistilBERT Medical Specialty Classifier **Author:** Steven Hill **Date:** 2026-04-24 **Base model:** distilbert-base-uncased **Task:** Multi-class text classification (20 medical specialties) **Project:** Clinical NLP Fairness Auditor — AI governance portfolio project --- ##...
[]
### Bombek1/DroneMamba-RCS
author: Bombek1 · library_name: pytorch · pipeline_tag: null · downloads: 0 · likes: 0 · createdAt: 2026-02-16T10:21:32Z · last_modified: 2026-02-16T10:21:41Z
tags: [ "pytorch", "rcs", "radar", "mamba", "ssm", "drone-detection", "time-series", "classification", "en", "dataset:Goorm-AI-04/Drone_RCS_Measurement", "license:mit", "region:us" ]
# 🛸 DroneMamba-RCS Classifier A high-performance **Selective State Space Model (Mamba)** for classifying **Radar Cross Section (RCS)** signatures of drones and objects. ![Confusion Matrix](confusion_matrix.png) ## 📊 Model Performance - **Test Accuracy**: 90.88% - **Macro F1-Score**: 0.8850 - **Classes**: 10 (F450...
[]
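The macro F1 score reported in the DroneMamba-RCS card is the unweighted mean of per-class F1 scores. A minimal sketch of that computation, with toy per-class precision/recall values rather than the model's actual numbers:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall for one class."""
    return 2 * precision * recall / (precision + recall)

# Toy per-class (precision, recall) pairs for a 3-class problem;
# the real model reports over 10 classes.
per_class = [(0.9, 0.8), (0.7, 0.75), (0.95, 0.9)]

f1_scores = [f1(p, r) for p, r in per_class]
macro_f1 = sum(f1_scores) / len(f1_scores)  # unweighted mean across classes

assert 0.0 < macro_f1 < 1.0
```

Unlike a micro or weighted average, the macro average gives every class equal weight regardless of how many test samples it has.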
### BiliSakura/RSBuilding-Swin-B
author: BiliSakura · library_name: transformers · pipeline_tag: feature-extraction · downloads: 34 · likes: 0 · createdAt: 2026-01-20T12:05:36Z · last_modified: 2026-02-05T10:00:44Z
tags: [ "transformers", "safetensors", "swin", "image-feature-extraction", "remote-sensing", "computer-vision", "swin-transformer", "building-extraction", "change-detection", "foundation-model", "feature-extraction", "dataset:remote-sensing-images", "license:apache-2.0", "endpoints_compatible", ...
# RSBuilding-Swin-B HuggingFace Transformers version of RSBuilding Swin-Base model, converted from MMDetection/MMSegmentation format. ## Source - **Source Code**: [https://github.com/Meize0729/RSBuilding](https://github.com/Meize0729/RSBuilding) - **Original Checkpoint**: [https://huggingface.co/models/BiliSakura/RS...
[]
### beaupi/dots.ocr-oQ8
author: beaupi · library_name: dots_ocr · pipeline_tag: image-text-to-text · downloads: 0 · likes: 0 · createdAt: 2026-04-05T00:30:02Z · last_modified: 2026-04-05T00:30:12Z
tags: [ "dots_ocr", "safetensors", "text-generation", "image-to-text", "ocr", "document-parse", "layout", "table", "formula", "transformers", "custom_code", "image-text-to-text", "conversational", "en", "zh", "multilingual", "license:mit", "8-bit", "region:us" ]
<div align="center"> <p align="center"> <img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/logo.png" width="300"/> <p> <h1 align="center"> dots.ocr: Multilingual Document Layout Parsing in a Single Vision-Language Model </h1> [![Blog](https://img.shields.io/badge/Blog-View_on_GitHub...
[]
### squ11z1/Hypnos-i1-8B
author: squ11z1 · library_name: transformers · pipeline_tag: text-generation · downloads: 167 · likes: 14 · createdAt: 2025-11-22T13:30:27Z · last_modified: 2025-12-22T00:14:45Z
tags: [ "transformers", "safetensors", "gguf", "llama", "text-generation", "reasoning", "mathematics", "logic", "chain-of-thought", "quantum", "physics", "llama-3", "text-generation-inference", "chatml", "roleplaying", "conversational", "synthetic data", "arxiv:2408.11857", "en", "data...
# Hypnos i1-8B (Quantum-Informed Reasoning Model) <div align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/67329d3f69fded92d56ab41a/4FLhrQnRrN4HtQzD9OF9U.jpeg" width="80%" alt="Hypnos Header Image"/> </div> <br> ## 🌌 Model Overview **Hypnos i1 8B** is a specialized reasoning model bas...
[]
### unsloth/granite-4.0-h-tiny
author: unsloth · library_name: transformers · pipeline_tag: text-generation · downloads: 4,070 · likes: 4 · createdAt: 2025-10-02T10:53:48Z · last_modified: 2025-10-07T06:44:06Z
tags: [ "transformers", "safetensors", "granitemoehybrid", "text-generation", "language", "unsloth", "granite-4.0", "conversational", "arxiv:0000.00000", "base_model:ibm-granite/granite-4.0-h-tiny", "base_model:finetune:ibm-granite/granite-4.0-h-tiny", "license:apache-2.0", "endpoints_compatible", ...
<div> <p style="margin-bottom: 0; margin-top: 0;"> <strong>See <a href="https://huggingface.co/collections/unsloth/granite-40-68ddf64b4a8717dc22a9322d">our collection</a> for all versions of Granite-4.0 including GGUF, 4-bit & 16-bit formats.</strong> </p> <p style="margin-bottom: 0;"> <em>Learn to run Gr...
[]
### octava/whisper-medium-indonesian-disaster
author: octava · library_name: transformers · pipeline_tag: automatic-speech-recognition · downloads: 76 · likes: 0 · createdAt: 2025-11-01T06:04:49Z · last_modified: 2025-11-03T16:38:20Z
tags: [ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "indonesian", "javanese", "asr", "generated_from_trainer", "id", "dataset:octava/InaVoCript-2.0", "base_model:openai/whisper-medium", "base_model:finetune:openai/whisper-medium", "license:apache-2.0", ...
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Medium Indonesian for Disaster Response This model is a fine-tuned version of [openai/whisper-medium](https://huggingface...
[]
### UnifiedHorusRA/wan2.2_Nuclear_blast_souvenir_photo
author: UnifiedHorusRA · library_name: null · pipeline_tag: null · downloads: 1 · likes: 0 · createdAt: 2025-09-04T20:40:01Z · last_modified: 2025-09-13T21:32:11Z
tags: [ "custom", "art", "en", "region:us" ]
# wan2.2 Nuclear blast souvenir photo **Creator**: [flywhale_Lee](https://civitai.com/user/flywhale_Lee) **Civitai Model Page**: [https://civitai.com/models/1864215](https://civitai.com/models/1864215) --- This repository contains multiple versions of the 'wan2.2 Nuclear blast souvenir photo' model from Civitai. Eac...
[]
### pierstab71/gpt-oss-20b-mlx-6Bit
author: pierstab71 · library_name: transformers · pipeline_tag: text-generation · downloads: 18 · likes: 0 · createdAt: 2025-09-07T23:54:59Z · last_modified: 2025-09-07T23:56:09Z
tags: [ "transformers", "safetensors", "gpt_oss", "text-generation", "vllm", "mlx", "mlx-my-repo", "conversational", "base_model:openai/gpt-oss-20b", "base_model:quantized:openai/gpt-oss-20b", "license:apache-2.0", "endpoints_compatible", "6-bit", "region:us" ]
# pierstab71/gpt-oss-20b-mlx-6Bit The Model [pierstab71/gpt-oss-20b-mlx-6Bit](https://huggingface.co/pierstab71/gpt-oss-20b-mlx-6Bit) was converted to MLX format from [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) using mlx-lm version **0.26.4**. ## Use with mlx ```bash pip install mlx-lm ``` ```py...
[]
### open-paws/8B-instruct-chat
author: open-paws · library_name: null · pipeline_tag: text-generation · downloads: 3 · likes: 2 · createdAt: 2025-04-23T08:52:17Z · last_modified: 2025-08-06T03:18:51Z
tags: [ "safetensors", "llama", "animal-liberation", "animal-advocacy", "open-paws", "ethics", "alignment", "text-generation", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
# Open Paws 8B Instruct Chat 🐾 **AI model for generating animal advocacy content and ethical reasoning (8 billion parameters)** ## Overview This model is part of the Open Paws initiative to develop AI systems aligned with animal liberation and advocacy principles. Designed to support advocates, educators, and resea...
[]
### arithmetic-circuit-overloading/Llama-3.3-70B-Instruct-v2-3d-5M-500K-0.1-reverse-padzero-99-512D-1L-4H-2048I
author: arithmetic-circuit-overloading · library_name: transformers · pipeline_tag: text-generation · downloads: 0 · likes: 0 · createdAt: 2026-04-06T11:29:31Z · last_modified: 2026-04-06T16:58:16Z
tags: [ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "base_model:meta-llama/Llama-3.3-70B-Instruct", "base_model:finetune:meta-llama/Llama-3.3-70B-Instruct", "license:llama3.3", "text-generation-inference", "endpoints_compatible", "region:us" ]
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-3.3-70B-Instruct-v2-3d-5M-500K-0.1-reverse-padzero-99-512D-1L-4H-2048I This model is a fine-tuned version of [meta-llama/Ll...
[]
### OmnionixAI/avara-edge-1.0
author: OmnionixAI · library_name: transformers · pipeline_tag: image-text-to-text · downloads: 0 · likes: 0 · createdAt: 2026-04-19T21:39:44Z · last_modified: 2026-04-20T04:24:46Z
tags: [ "transformers", "safetensors", "qwen3_5", "image-text-to-text", "text-generation-inference", "unsloth", "avara", "Omnionix", "conversational", "en", "base_model:unsloth/Qwen3.5-0.8B", "base_model:finetune:unsloth/Qwen3.5-0.8B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
<div align="center"> <img src="https://huggingface.co/OmnionixAI/avara-edge-1.0/resolve/main/logo.png" width="350" alt="Avara Edge Logo"> # Avara-Edge-1.0 **Advanced Multimodal Logic Engine** </div> --- ## Technical Overview Avara-Edge-1.0 is a high-efficiency Vision-Language Model (VLM) designed for localized...
[]
### MarkProMaster229/FluffyTail
author: MarkProMaster229 · library_name: null · pipeline_tag: text-generation · downloads: 4 · likes: 0 · createdAt: 2026-02-03T20:17:03Z · last_modified: 2026-02-04T11:54:02Z
tags: [ "safetensors", "qwen2", "conversational", "Furry", "merge", "LoRA", "text-generation", "ru", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct", "license:apache-2.0", "region:us" ]
<div style=" background: linear-gradient(135deg, #170e34 0%, #3a1c6e 30%, #2d1b69 70%, #170e34 100%); padding: 30px; border-radius: 16px; margin-top: 20px; color: #e2e2ff; box-shadow: inset 0 0 60px rgba(106, 13, 173, 0.2); "> <h1 align="center"> <strong>FluffyTail</strong> </h1> <div align=...
[]
### DreamFast/gemma-3-12b-it-heretic-v2
author: DreamFast · library_name: transformers · pipeline_tag: text-generation · downloads: 4,711 · likes: 15 · createdAt: 2026-03-10T05:44:58Z · last_modified: 2026-03-10T14:15:19Z
tags: [ "transformers", "safetensors", "gguf", "gemma3", "image-text-to-text", "abliteration", "heretic", "uncensored", "gemma", "ltx-2", "comfyui", "video-generation", "text-encoder", "nvfp4", "blackwell", "text-generation", "conversational", "en", "base_model:google/gemma-3-12b-it", ...
# Gemma 3 12B IT - Heretic v2 (Abliterated) An abliterated version of [Google's Gemma 3 12B IT](https://huggingface.co/google/gemma-3-12b-it) created using [Heretic](https://github.com/p-e-w/heretic) v1.2.0. This model has reduced refusals while maintaining model quality, making it suitable as an uncensored text encod...
[]
### sr5434/skin-cancer-classifier
author: sr5434 · library_name: null · pipeline_tag: null · downloads: 0 · likes: 1 · createdAt: 2026-03-23T21:42:33Z · last_modified: 2026-04-07T00:14:08Z
tags: [ "safetensors", "base_model:Qwen/Qwen3-VL-30B-A3B-Instruct", "base_model:finetune:Qwen/Qwen3-VL-30B-A3B-Instruct", "license:mit", "region:us" ]
# Vision Language Models as Explainable Classifiers for Skin Lesions This is the project I submitted to the Terra North Jersey STEM Fair in 2026. I also gave a short talk about it at the Bridgewater-Raritan High School AI/ML club. It finetunes Qwen 3 VL 30b A3b with reinforcement learning to classify skin lesions as be...
entities: [ { "start": 267, "end": 289, "text": "reinforcement learning", "label": "training method", "score": 0.700066089630127 } ]
### mradermacher/MultiCritique-SFT-7B-GGUF
author: mradermacher · library_name: transformers · pipeline_tag: null · downloads: 1 · likes: 0 · createdAt: 2025-09-09T00:52:23Z · last_modified: 2025-09-09T01:23:50Z
tags: [ "transformers", "gguf", "en", "base_model:DataHammer/MultiCritique-SFT-7B", "base_model:quantized:DataHammer/MultiCritique-SFT-7B", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static qu...
[]
### BaoNgoc29/qwen2-tropical-qlora-better
author: BaoNgoc29 · library_name: transformers · pipeline_tag: null · downloads: 0 · likes: 0 · createdAt: 2026-01-21T17:36:27Z · last_modified: 2026-01-21T18:13:27Z
tags: [ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:Qwen/Qwen2-VL-2B-Instruct", "base_model:finetune:Qwen/Qwen2-VL-2B-Instruct", "endpoints_compatible", "region:us" ]
# Model Card for qwen2-tropical-qlora-better This model is a fine-tuned version of [Qwen/Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a t...
[]
### smcleod/guardrails-v1
author: smcleod · library_name: transformers · pipeline_tag: text-classification · downloads: 0 · likes: 0 · createdAt: 2026-04-24T11:33:43Z · last_modified: 2026-04-24T12:24:36Z
tags: [ "transformers", "safetensors", "modernbert", "text-classification", "prompt-injection", "jailbreak-detection", "guardrails", "safety", "classification", "en", "dataset:leolee99/PIGuard", "dataset:leolee99/NotInject", "base_model:answerdotai/ModernBERT-base", "base_model:finetune:answerdota...
# guardrails-v1 A binary prompt-safety classifier. Given a prompt, it returns `safe` or `unsafe` (attempted prompt injection / jailbreak). Designed as a cheap first-pass filter in front of LLM calls - your application decides what to do with the verdict. Project source: https://github.com/sammcj/guardrails-lm Fine-t...
[]
### kreasof-ai/Liquid-Thinking-Preview-GGUF
author: kreasof-ai · library_name: null · pipeline_tag: text-generation · downloads: 21 · likes: 0 · createdAt: 2025-08-19T16:28:42Z · last_modified: 2025-08-24T19:23:46Z
tags: [ "gguf", "text-generation", "en", "base_model:kreasof-ai/Liquid-Thinking-Preview", "base_model:quantized:kreasof-ai/Liquid-Thinking-Preview", "license:other", "endpoints_compatible", "region:us" ]
<center> <div style="text-align: center;"> <img src="https://cdn-uploads.huggingface.co/production/uploads/63b6f2e752c02ae8acbaa4d8/Qn87jdxhfCaqQUiAqD_60.png" alt="Liquid with Thinking" style="width: 100%; max-width: 66%; height: auto; display: inline-block; margin-bottom: 0.5em; margin-top: 0.5em;" /...
[]
### giladgd/Apertus-70B-Instruct-2509-GGUF
author: giladgd · library_name: node-llama-cpp · pipeline_tag: text-generation · downloads: 134 · likes: 0 · createdAt: 2025-10-03T13:42:28Z · last_modified: 2025-10-23T22:23:27Z
tags: [ "node-llama-cpp", "gguf", "llama.cpp", "apertus", "multilingual", "swiss-ai", "compliant", "conversational", "text-generation", "base_model:swiss-ai/Apertus-70B-Instruct-2509", "base_model:quantized:swiss-ai/Apertus-70B-Instruct-2509", "license:apache-2.0", "endpoints_compatible", "region:...
# Apertus-70B-Instruct-2509-GGUF Static quants of [`swiss-ai/Apertus-70B-Instruct-2509`](https://huggingface.co/swiss-ai/Apertus-70B-Instruct-2509). ## Quants | Link | [URI](https://node-llama-cpp.withcat.ai/cli/pull) | Quant | Size | |:-----|:--------------------------------------------------|:------|-----:| | [GGU...
[]
### rabah2026/wav2vec2-large-xlsr-53-arabic-quran-v3
author: rabah2026 · library_name: null · pipeline_tag: automatic-speech-recognition · downloads: 5 · likes: 0 · createdAt: 2025-12-14T21:32:12Z · last_modified: 2025-12-14T23:17:09Z
tags: [ "safetensors", "wav2vec2", "audio", "automatic-speech-recognition", "quran", "tarteel", "ar", "license:apache-2.0", "region:us" ]
# Wav2Vec2 Large XLSR 53 Arabic Quran (Fine-Tuned) V3 This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-arabic](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-arabic) on the **Quran Ayah Corpus (Rabah2026)** dataset. It is specialized for speech recognition of the Quran (...
[]
### laion/timbre-whisper
author: laion · library_name: null · pipeline_tag: null · downloads: 122 · likes: 0 · createdAt: 2025-12-25T22:45:44Z · last_modified: 2025-12-26T14:23:37Z
tags: [ "safetensors", "whisper", "license:cc-by-4.0", "region:us" ]
# Timbre-Whisper **Timbre-Whisper** is a Whisper-based model fine-tuned for **vocal timbre tagging and natural-language voice description**. It builds directly on **[BUD-E Whisper V1.1](https://huggingface.co/laion/BUD-E-Whisper_V1.1)**, extending its emotional speech captioning capabilities toward detailed perceptual...
[]
### AEUPH/synthetic_Jailbreak_Defense_Doorpage_v64-model
author: AEUPH · library_name: peft · pipeline_tag: text-generation · downloads: 0 · likes: 0 · createdAt: 2026-04-07T09:55:26Z · last_modified: 2026-04-07T09:55:42Z
tags: [ "peft", "safetensors", "qwen", "qwen2.5", "fine-tuned", "synthetic-data", "instruction-tuned", "silicon-factory", "text-generation", "conversational", "en", "base_model:Qwen/Qwen2.5-0.5B-Instruct", "base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct", "license:mit", "region:us" ]
# 🚀 Jailbreak Defense Doorpage V64 > **Fine-Tuned from Qwen2.5-0.5B-Instruct** · Specialized for **AI JAILBREAK DEFENSE** > Generated with Silicon Factory v3 · Tree-Speculative Decoding + 4D Brane Memory <div align="center"> | Dataset | Model | Buy Gold Tier | |---------|-------|---------------| | [synthet...
[]
### jialicheng/unlearn-so_cifar10_resnet-50_neggrad_2_100
author: jialicheng · library_name: null · pipeline_tag: image-classification · downloads: 4 · likes: 0 · createdAt: 2025-10-29T03:39:34Z · last_modified: 2025-10-29T03:39:51Z
tags: [ "safetensors", "resnet", "image-classification", "vision", "generated_from_trainer", "base_model:microsoft/resnet-50", "base_model:finetune:microsoft/resnet-50", "license:apache-2.0", "region:us" ]
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 100 This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the cifar10 datase...
[]
### Muapi/mj_renderer
author: Muapi · library_name: null · pipeline_tag: null · downloads: 0 · likes: 0 · createdAt: 2025-08-19T09:43:18Z · last_modified: 2025-08-19T09:43:32Z
tags: [ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
# MJ_Renderer ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: photo-fen ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "...
[]
Mariobilly/ibd26-000001500
Mariobilly
2026-04-26T12:16:06Z
0
0
diffusers
[ "diffusers", "lora", "z-image", "z-image-turbo", "text-to-image", "license:other", "region:us" ]
text-to-image
2026-04-26T10:45:05Z
# IBD26 000001500 LoRA for **Z-Image Turbo**. - **File:** `IBD26_000001500.safetensors` - **Trigger word:** `ibd26` - **Trained by:** [@Mariobilly](https://huggingface.co/Mariobilly) ## Samples ![sample 1](images/01.png) ![sample 2](images/02.png) ![sample 3](images/03.png) ![sample 4](images/04.png) ## Usage Pla...
[]
dim/lbm_train_test_gap_struct_noise_6_sdxl_1_wan_mix_177600
dim
2026-01-26T12:38:25Z
0
0
diffusers
[ "diffusers", "safetensors", "region:us" ]
null
2026-01-26T12:22:40Z
```python from datasets import load_dataset # dataset_name = "dim/nfs_pix2pix_1920_1080_v5" # dataset_name = "dim/nfs_pix2pix_1920_1080_v5_upscale_2x_raw" # dataset_name = "dim/nfs_pix2pix_1920_1080_v6" dataset_name = "dim/render_nfs_4screens_6_sdxl_1_wan_mix" # dataset_name = "dim/render_nfs_4screens_5_sdxl_1_wan_mix...
[]
wgcyeo/ci-grpo_DeepSeek-R1-Distill-Qwen-7B_bs16_g16_mb128_lr1e-6_b1e-3_clip0p2_temp0p7_ep30
wgcyeo
2026-04-06T03:42:14Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "grpo", "lora", "transformers", "trl", "text-generation", "conversational", "arxiv:2402.03300", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "region:us" ]
text-generation
2026-04-06T03:42:03Z
# Model Card for grpo_DeepSeek-R1-Distill-Qwen-7B_bs16_g16_mb128_lr1e-6_b1e-3_clip0p2_temp0p7_ep30 This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Qu...
[]
mradermacher/InnoSpark-72B-1224-GGUF
mradermacher
2025-12-24T23:22:41Z
1
0
transformers
[ "transformers", "gguf", "en", "base_model:sii-research/InnoSpark-72B-1224", "base_model:quantized:sii-research/InnoSpark-72B-1224", "endpoints_compatible", "region:us", "conversational" ]
null
2025-12-24T17:43:53Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
Mishrakshitij/spar-online-grpo-v1-checkpoint-100
Mishrakshitij
2026-02-27T06:28:09Z
0
0
null
[ "safetensors", "region:us" ]
null
2026-02-27T06:27:32Z
# SPAR Online GRPO V1 Checkpoint-100 (LoRA Adapter) This repository contains the LoRA adapter produced at `outputs/online_grpo_v1/checkpoint-100/model`. - Project: SPAR - Training stage: Online GRPO V1 - Checkpoint: step 100 - Adapter rank (`r`): 8 - Adapter alpha: 16 - Adapter dropout: 0.05 - Base model path used du...
[ { "start": 7, "end": 21, "text": "Online GRPO V1", "label": "training method", "score": 0.7284355759620667 }, { "start": 189, "end": 203, "text": "Online GRPO V1", "label": "training method", "score": 0.8866232633590698 } ]
Sidor-Vlada123/eichi_v_style_LoRA
Sidor-Vlada123
2026-03-24T10:59:06Z
5
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "re...
text-to-image
2026-03-24T10:58:59Z
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - Sidor-Vlada123/eichi_v_style_LoRA <Gallery /> ## Model description These are Sidor-Vlada123/eic...
[ { "start": 204, "end": 208, "text": "LoRA", "label": "training method", "score": 0.7288704514503479 }, { "start": 336, "end": 340, "text": "LoRA", "label": "training method", "score": 0.7790791988372803 }, { "start": 483, "end": 487, "text": "LoRA", "l...
mmahmoodictbd/test-gemma3-fine-tuned-live
mmahmoodictbd
2026-04-11T14:06:52Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:google/gemma-3-270m-it", "base_model:finetune:google/gemma-3-270m-it", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2026-04-11T14:06:37Z
# Model Card for checkpoint_models This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but...
[]
Etherll/test
Etherll
2026-02-09T05:21:16Z
6
0
sentence-transformers
[ "sentence-transformers", "gguf", "gemma3_text", "llama.cpp", "unsloth", "endpoints_compatible", "region:us" ]
null
2026-02-09T05:19:43Z
# test - GGUF This sentence-transformers model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth). ## Available Model files: - `embeddinggemma-300m.Q5_K_M.gguf` - `embeddinggemma-300m.Q8_0.gguf` - `embeddinggemma-300m.Q4_K_M.gguf` This was trained 2x faster with [Unsloth...
[]
whybe-choi/colgemma3-ko-vdr-v0.5
whybe-choi
2026-03-22T02:22:04Z
18
0
null
[ "safetensors", "gemma3", "region:us" ]
null
2026-03-22T02:00:59Z
### KoVidore V1 (NDCG@5) | Task | NDCG@5 | |------|--------| | KoVidoreFinOCRRetrieval | 0.2663 | | KoVidoreMIRRetrieval | 0.3083 | | KoVidoreOfficeRetrieval | 0.3427 | | KoVidoreSlideRetrieval | 0.5294 | | KoVidoreVQARetrieval | 0.6778 | | **Average** | **0.4249** | ### KoVidore V2 (NDCG@10) | Task | NDCG@10 | |---...
[]
crislmfroes/svla-panda-open-articulated-objects-v2.0-remove-handle-obs
crislmfroes
2025-09-18T17:46:49Z
1
0
lerobot
[ "lerobot", "safetensors", "smolvla", "robotics", "dataset:crislmfroes/panda-open-articulated-objects-v2.0-remove-handle-obs", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-09-18T17:46:13Z
# Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This pol...
[]
rabimba/gemma2racer
rabimba
2025-12-15T02:56:29Z
3
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "text-generation", "gemma2", "local-inference", "bitsandbytes", "fine-tuned", "conversational", "base_model:google/gemma-2-2b", "base_model:finetune:google/gemma-2-2b", "license:gemma", "endpoints_compatible", "region:us" ]
text-generation
2025-12-14T06:42:10Z
# Gemma-2-Racer `gemma2racer` is a specialized optimization of Google's **Gemma 2** architecture. This model is fine-tuned and configured specifically for "racing" performance—prioritizing high-speed token generation and low-memory overhead for local LLM deployment. --- ## Model Summary The following table outlines...
[]
mradermacher/Bert-Fake-News-Detection-GGUF
mradermacher
2025-11-15T19:39:20Z
5
0
transformers
[ "transformers", "gguf", "en", "base_model:DarkKnight001/Bert-Fake-News-Detection", "base_model:quantized:DarkKnight001/Bert-Fake-News-Detection", "endpoints_compatible", "region:us", "feature-extraction" ]
null
2025-11-15T19:28:53Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
oulianov/ACT_BBOX-my_dataset_16-081yz8legn-fsphyr0bhw-qx7add4abv
oulianov
2025-08-27T16:30:14Z
0
0
phosphobot
[ "phosphobot", "act", "robotics", "dataset:oulianov/my_dataset_16", "region:us" ]
robotics
2025-08-27T16:29:48Z
--- datasets: oulianov/my_dataset_16 library_name: phosphobot pipeline_tag: robotics model_name: act tags: - phosphobot - act task_categories: - robotics --- # act Model - phospho Training Pipeline ## Error Traceback We faced an issue while training your model. ``` The...
[]
INNOCUITY/DatapressoRM_Lora_v1
INNOCUITY
2025-08-26T13:29:18Z
0
0
null
[ "safetensors", "qwen3", "license:apache-2.0", "region:us" ]
null
2025-08-26T11:01:22Z
## Current Evaluation System Analysis ### 1. Trajectory Structure The LLM agent generates trajectories using the following XML structure: ```xml <think>reasoning content</think> <tool_call>{"name": "tool_name", "parameters": {...}}</tool_call> <result>execution result</result> ... <think>final reasoning</think> <answe...
[]
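The trajectory format described in the INNOCUITY/DatapressoRM_Lora_v1 card above (`<think>` / `<tool_call>` / `<result>` / `<answer>` segments, with JSON inside tool calls) can be parsed with a short sketch. The tool name and contents below are invented for illustration; only the tag structure comes from the card.

```python
import json
import re

# Toy trajectory following the tag structure the card describes; the
# specific tool name and payload are made up for illustration.
trajectory = (
    "<think>need the weather</think>"
    '<tool_call>{"name": "get_weather", "parameters": {"city": "Paris"}}</tool_call>'
    "<result>sunny, 21C</result>"
    "<think>done</think>"
    "<answer>It is sunny in Paris.</answer>"
)

def parse_trajectory(text):
    """Extract each tagged segment in order as (tag, content) pairs."""
    pattern = re.compile(r"<(think|tool_call|result|answer)>(.*?)</\1>", re.S)
    steps = []
    for tag, body in pattern.findall(text):
        if tag == "tool_call":
            body = json.loads(body)  # tool calls carry a JSON payload
        steps.append((tag, body))
    return steps

steps = parse_trajectory(trajectory)
```

A reward model over such trajectories would score the ordered `steps` list, e.g. checking that every `tool_call` is followed by a `result` and that exactly one `answer` closes the episode.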
Muapi/artify-s-conceptdesigns-for-flux
Muapi
2025-08-22T21:46:59Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-22T21:46:25Z
# Artify´s Conceptdesigns for Flux ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: articyborg, in the style of artics ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flu...
[]
array/Qwen2.5-VL-Mull
array
2026-02-04T21:29:15Z
47
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "multimodal", "vision-language", "spatial-reasoning", "latent-reasoning", "conversational", "custom_code", "arxiv:2512.10941", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct", "li...
image-text-to-text
2025-12-10T23:29:57Z
# Mull-Tokens: Modality-Agnostic Latent Thinking This is the model for the paper **"Mull-Tokens: Modality-Agnostic Latent Thinking"**. [[Paper]](https://arxiv.org/abs/2512.10941) | [[Project Page]](https://arijitray1993.github.io/mulltokens/) | [[Code]](https://github.com/arijitray1993/mull) ## Overview Mull-Tokens...
[]
jude1903/asiawan21-lora
jude1903
2025-10-04T14:04:01Z
7
0
diffusers
[ "diffusers", "text-to-video", "lora", "template:sd-lora", "ai-toolkit", "base_model:Wan-AI/Wan2.1-T2V-14B-Diffusers", "base_model:adapter:Wan-AI/Wan2.1-T2V-14B-Diffusers", "license:creativeml-openrail-m", "region:us" ]
text-to-video
2025-10-04T14:03:39Z
# asiawan21-lora Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) ## Trigger words No trigger words defined. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. [Download](jude1903/asiawan21...
[]
Neurazum/bai-Emotion-6
Neurazum
2025-08-20T10:42:01Z
0
1
keras
[ "keras", "eeg", "brain", "deeplearning", "artificialintelligence", "ai", "model", "emotions", "neuroscience", "neura", "neuro", "bci", "health", "time-series-forecasting", "en", "tr", "license:cc-by-nc-sa-4.0", "region:us" ]
time-series-forecasting
2025-08-16T19:13:49Z
# bai-6 Emotion (TR) ## Description The bai-6 Emotion model is a detailed emotion classification model trained on data collected via EEG and iEEG. The model can run with a 6-channel EEG device. ## Target Audience bai models are designed for everyone. The open-source versions can be used by anyo...
[]
trungpq/rlcc-new-taste-class-weight-absa-min
trungpq
2025-09-17T02:51:50Z
0
0
transformers
[ "transformers", "safetensors", "bert_with_absa", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2025-09-10T16:36:02Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # rlcc-new-taste-class-weight-absa-min This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It ...
[]
atdokmeci/fake_real_news_model
atdokmeci
2025-08-31T18:01:00Z
0
0
null
[ "region:us" ]
null
2025-08-31T18:00:11Z
# PassiveAggressiveClassifier Fake News Detector ## Model Description This model uses a PassiveAggressiveClassifier from scikit-learn to classify news articles as "Fake" or "Real". The input data consists of news articles from two datasets (`True.csv` and `Fake.csv`). Text data is preprocessed (lowercased, punctuatio...
[ { "start": 386, "end": 392, "text": "TF-IDF", "label": "training method", "score": 0.9370739459991455 }, { "start": 991, "end": 997, "text": "TF-IDF", "label": "training method", "score": 0.9096917510032654 } ]
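The atdokmeci/fake_real_news_model card above pairs TF-IDF features with a PassiveAggressiveClassifier. As a minimal stdlib sketch of the TF-IDF weighting step only (not scikit-learn's exact smoothing or normalization, and with a toy corpus invented here):

```python
import math
from collections import Counter

def tfidf(docs):
    """Toy TF-IDF: term frequency times a smoothed inverse document frequency.
    Illustrative only; scikit-learn's TfidfVectorizer differs in smoothing
    and applies L2 normalization on top."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    # document frequency: in how many docs each term appears
    df = Counter(term for tokens in tokenized for term in set(tokens))
    weights = []
    for tokens in tokenized:
        tf = Counter(tokens)
        weights.append({
            term: (count / len(tokens)) * math.log((1 + n) / (1 + df[term]))
            for term, count in tf.items()
        })
    return weights

docs = ["breaking real news report", "fake breaking claim", "real report"]
w = tfidf(docs)
# Rare terms ("fake") receive a higher weight than common ones ("breaking"),
# which is what lets a linear classifier key on distinctive vocabulary.
```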
thaymanhinhsamsung24h/tiem-thay-man-hinh-samsung-a73-gia-re
thaymanhinhsamsung24h
2025-08-25T08:51:24Z
0
0
null
[ "region:us" ]
null
2025-08-25T08:50:43Z
<h1><strong>Affordable Samsung A73 5G Screen Replacement Shop in TPHCM – Professional Service at Bệnh Viện Điện Thoại, Laptop 24h</strong></h1> <p>When your Samsung A73 5G screen runs into trouble, finding a <a href="https://chamsocdidong.com/thay-man-hinh-samsung-galaxy-...
[]
mradermacher/FinSenti-Qwen3-8B-GGUF
mradermacher
2026-04-23T03:11:16Z
702
1
transformers
[ "transformers", "gguf", "finance", "financial-sentiment", "sentiment-analysis", "chain-of-thought", "reasoning", "grpo", "sft", "lora", "finsenti", "en", "dataset:Ayansk11/FinSenti-Dataset", "base_model:Ayansk11/FinSenti-Qwen3-8B", "base_model:adapter:Ayansk11/FinSenti-Qwen3-8B", "lice...
null
2026-04-10T22:55:12Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
khadim-hussain/qwen3-14b-stem-qa-gguf
khadim-hussain
2026-02-04T15:34:23Z
32
0
null
[ "gguf", "qwen3", "quantized", "q4_k_m", "f16", "stem", "science", "education", "ollama", "llama-cpp", "text-generation", "en", "base_model:Qwen/Qwen3-14B", "base_model:quantized:Qwen/Qwen3-14B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2026-02-03T15:32:46Z
# Qwen3-14B STEM Q&A (GGUF) GGUF quantized version of Qwen3-14B fine-tuned for STEM Q&A tasks. Ready for use with Ollama, llama.cpp, LM Studio, and other GGUF-compatible tools. **Author:** Khadim Hussain ## Available Versions | Model | Size | Description | |-------|------|-------------| | [qwen3-14b-stem-qa](https:...
[]
UmbrellaInc/T-Polyphalus_RP-3.2-1B-GGUF
UmbrellaInc
2026-01-16T10:08:32Z
8
0
transformers
[ "transformers", "gguf", "sillytavern", "koboldcpp", "roleplay", "rp", "merge", "llama-cpp", "text-generation", "en", "es", "base_model:UmbrellaInc/T-Polyphalus_RP-3.2-1B", "base_model:quantized:UmbrellaInc/T-Polyphalus_RP-3.2-1B", "license:llama3.2", "endpoints_compatible", "region:us"...
text-generation
2026-01-16T01:30:45Z
# T-Polyphalus_RP-3.2-1B **Verification Status:** Failed Project (Moral Persistence) **Model creator:** [UmbrellaInc](https://huggingface.co/UmbrellaInc)<br/> **Original model**: [UmbrellaInc/T-Polyphalus_RP-3.2-1B](https://huggingface.co/UmbrellaInc/T-Polyphalus_RP-3.2-1B)<br/> **GGUF quantization:** provided by [Nova...
[]
fernando-machina/reasoning-20260218-1342
fernando-machina
2026-02-18T13:44:24Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "hf_jobs", "trackio:https://huggingface.co/spaces/fernando-machina/trackio", "trackio", "base_model:google/gemma-2-2b-it", "base_model:finetune:google/gemma-2-2b-it", "endpoints_compatible", "region:us" ]
null
2026-02-18T13:44:11Z
# Model Card for reasoning-20260218-1342 This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, b...
[]
KnutJaegersberg/JoyAI-LLM-Flash-Q6_K-GGUF
KnutJaegersberg
2026-02-16T11:45:40Z
149
7
null
[ "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "zh", "en", "base_model:jdopensource/JoyAI-LLM-Flash", "base_model:quantized:jdopensource/JoyAI-LLM-Flash", "endpoints_compatible", "region:us" ]
text-generation
2026-02-16T11:41:30Z
# KnutJaegersberg/JoyAI-LLM-Flash-Q6_K-GGUF This model was converted to GGUF format from [`jdopensource/JoyAI-LLM-Flash`](https://huggingface.co/jdopensource/JoyAI-LLM-Flash) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](h...
[]
HSP-IIT/groot_xhand_first_test
HSP-IIT
2026-04-17T23:50:52Z
0
0
lerobot
[ "lerobot", "safetensors", "groot", "robotics", "dataset:HSP-IIT/xhand_first_test", "license:apache-2.0", "region:us" ]
robotics
2026-04-17T23:50:10Z
# Model Card for groot <!-- Provide a quick summary of what the model is/does. --> _Model type not recognized — please update this template._ This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface....
[]
KoinicLabs/AXL-Chat-Pro
KoinicLabs
2026-03-30T23:45:43Z
0
0
transformers
[ "transformers", "gguf", "multiscale_transformer", "text-generation", "code-generation", "multi-scale-transformer", "cpu-optimized", "koinic", "pytorch", "llama", "byte-level", "conversational", "code", "dataset:koinic/axl-chat-pairs", "license:apache-2.0", "model-index", "endpoints_c...
text-generation
2026-03-30T23:44:09Z
# AXL-Chat-Pro Advanced conversational AI. 12.8M params. PPL 1.34. Context 256 bytes. Part of the AXL model family by [KoinicLabs](https://huggingface.co/KoinicLabs). ## Model Details | Property | Value | |----------|-------| | Developed by | [KoinicLabs](https://huggingface.co/KoinicLabs) | | Architecture...
[]
santoshsiddegowda/bizom-wiki-chat
santoshsiddegowda
2025-11-10T19:03:35Z
1
0
null
[ "gguf", "gemma3_text", "llama.cpp", "unsloth", "endpoints_compatible", "region:us", "conversational" ]
null
2025-10-26T18:19:16Z
# bizom-wiki-chat - GGUF This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth). **Example usage**: - For text only LLMs: **llama-cli** **--hf** repo_id/model_name **-p** "why is the sky blue?" - For multimodal models: **llama-mtmd-cli** **-m** model_name.gguf *...
[]
kalkiai3000/we-math-phi4
kalkiai3000
2025-08-26T17:21:15Z
2
0
null
[ "safetensors", "phi4mm", "custom_code", "region:us" ]
null
2025-08-24T08:37:06Z
### Single-sample prediction example Below is a minimal example to run a single datapoint using this model from the Hub. It uses the base processor and the finetuned model: ```python import re import torch from PIL import Image from transformers import AutoProcessor, AutoModelForCausalLM # Inputs caption = "A honeyc...
[]
williamanderson/Salesforce_Data_Architect_Exam
williamanderson
2025-12-12T13:15:04Z
0
0
null
[ "region:us" ]
null
2025-12-12T13:14:24Z
We have compiled actual exam questions and their answers, with 99 days of free updates, making this website one of the best options to save additional money. To help you prepare for the Salesforce Data-Architect exam with questions and verified answers by IT-certified experts, CertsTopics has put together a complete col...
[]
antonioojedasantos/stable-video-diffusion-img2vid-xt
antonioojedasantos
2026-03-02T03:09:02Z
27
0
diffusers
[ "diffusers", "safetensors", "image-to-video", "license:other", "diffusers:StableVideoDiffusionPipeline", "region:us" ]
image-to-video
2026-03-02T03:09:01Z
# Stable Video Diffusion Image-to-Video Model Card <!-- Provide a quick summary of what the model is/does. --> ![row01](output_tile.gif) Stable Video Diffusion (SVD) Image-to-Video is a diffusion model that takes in a still image as a conditioning frame, and generates a video from it. Please note: For commercial use...
[]
mradermacher/ernie-4.5-0.3b-aegis-safety-lora-GGUF
mradermacher
2025-11-14T00:34:11Z
60
0
transformers
[ "transformers", "gguf", "content-safety", "content-moderation", "safety", "lora", "fine-tuned", "nvidia-aegis", "text-classification", "en", "dataset:nvidia/Aegis-AI-Content-Safety-Dataset-2.0", "base_model:ahczhg/ernie-4.5-0.3b-aegis-safety-lora", "base_model:adapter:ahczhg/ernie-4.5-0.3b-a...
text-classification
2025-11-14T00:31:16Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
ttqdunggg/phobert_45k_1k2_boduoi_test7k
ttqdunggg
2026-03-11T18:19:47Z
12
0
null
[ "safetensors", "roberta", "generated_from_trainer", "base_model:vinai/phobert-base-v2", "base_model:finetune:vinai/phobert-base-v2", "license:agpl-3.0", "region:us" ]
null
2026-03-11T18:19:33Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phobert_45k_1k2_boduoi_test7k This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-...
[]
lzeeorno666/SMAFormer-Medical-Image-Segmentation
lzeeorno666
2025-12-23T14:42:29Z
0
0
null
[ "arxiv:2409.00346", "arxiv:2512.03597", "base_model:microsoft/swin-tiny-patch4-window7-224", "base_model:finetune:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "region:us" ]
null
2025-12-23T12:53:38Z
# [SMAFormer: Synergistic Multi-Attention Transformer for Medical Image Segmentation](https://ieeexplore.ieee.org/abstract/document/10822736?casa_token=hzDdhgw0U4oAAAAA:ahvi4yT2Zy4LVpHsot8ZsqkPoROSzjWGfaaA7GaUX2OqmRVwrYzx-KLslxV1--fABlpcSZUCCA) # [paper](https://huggingface.co/papers/2409.00346) # 🛎 Citation If you fin...
[]
yassineelsheikh/smollm2-banking-lora
yassineelsheikh
2025-11-21T23:20:52Z
0
0
null
[ "safetensors", "region:us" ]
null
2025-11-21T23:11:21Z
# SmolLM2 Banking LoRA Fine-tuned SmolLM2 for banking customer service intent classification. **Training details:** - Dataset: atulgupta002/banking_customer_service_query_intent - LoRA: r=8, alpha=32, target_modules=["q_proj","v_proj"] - Epochs: 3, batch size: 8, max_seq_length: 192 **Usage:** from transformers impo...
[]
janhq/Jan-v2-VL-max-Instruct-FP8
janhq
2025-12-31T11:28:59Z
87
10
transformers
[ "transformers", "safetensors", "qwen3_vl_moe", "image-text-to-text", "agent", "conversational", "en", "base_model:Qwen/Qwen3-VL-30B-A3B-Instruct", "base_model:quantized:Qwen/Qwen3-VL-30B-A3B-Instruct", "license:apache-2.0", "endpoints_compatible", "compressed-tensors", "region:us" ]
image-text-to-text
2025-12-30T03:59:18Z
# Jan-v2-VL: Multimodal Agent for Long-Horizon Tasks [![GitHub](https://img.shields.io/badge/GitHub-Repository-blue?logo=github)](https://github.com/janhq/jan) [![License](https://img.shields.io/badge/License-Apache%202.0-yellow)](https://opensource.org/licenses/Apache-2.0) [![Jan App](https://img.shields.io/badge/Po...
[]
FlagRelease/MiniMax-M2.7-iluvatar-FlagOS
FlagRelease
2026-04-15T03:18:50Z
34
0
null
[ "safetensors", "minimax_m2", "custom_code", "region:us" ]
null
2026-04-11T11:45:06Z
# Introduction MiniMax M2.7 is the latest-generation model in the M2 series, as well as the first model in the series to deeply participate in its own iteration. It can autonomously build complex Agent Harnesses and Skills, update its own Memory, and drive self-iteration through reinforcement learning, forming a closed...
[]
Benson5376/gemma-text-to-sql
Benson5376
2025-08-05T03:12:51Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-1b-pt", "base_model:finetune:google/gemma-3-1b-pt", "endpoints_compatible", "region:us" ]
null
2025-08-04T09:42:55Z
# Model Card for gemma-text-to-sql This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but cou...
[]
M3kkk/ppo-Huggy
M3kkk
2025-09-21T16:39:37Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2025-09-21T16:39:25Z
# **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We...
[]
KangLiao/Puffin
KangLiao
2026-03-06T06:22:29Z
0
24
null
[ "unified multimodal model", "camera-centric", "generation", "understanding", "spatial intelligence", "3D vision", "text-to-3d", "arxiv:2510.08673", "license:other", "region:us" ]
text-to-3d
2025-05-14T15:45:32Z
# **Thinking with Camera: A Unified Multimodal Model for Camera-Centric Understanding and Generation** <p align="center"> <img src="https://github.com/KangLiao929/Puffin/blob/main/assets/website/tesear_horizon.png?raw=true" alt="Thinking with Camera" width="100%"> </p> ## Paper This model was presented in the paper...
[]
ondayex/jina-embed-base-dense-retriever
ondayex
2026-01-29T12:27:19Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "qwen2", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:900", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "en", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:j...
sentence-similarity
2026-01-29T12:26:01Z
# jina2 Base This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [jinaai/jina-code-embeddings-0.5b](https://huggingface.co/jinaai/jina-code-embeddings-0.5b). It maps sentences & paragraphs to a 896-dimensional dense vector space and can be used for semantic textual similarity, semantic search...
[]
hdtrnk/VACEPhantom
hdtrnk
2025-09-20T20:35:40Z
0
0
null
[ "region:us" ]
null
2025-09-20T19:46:12Z
# 📖 Wan2.1 VACE + Phantom (Finetune) **Author / Creator:** [Inner_Reflections_AI](https://civitai.com/user/Inner_Reflections_AI) **Original Guide:** [Wan VACE + Phantom Merge – An Inner Reflections Guide](https://civitai.com/articles/17908/guide-wan-vace-phantom-merge-an-inner-reflections-guide) --- ## 🔹 About T...
[]
swaggerlish/farmguard-ai-multi-crops-disease
swaggerlish
2026-03-13T12:41:12Z
0
0
null
[ "vision", "image-classification", "plant-disease", "cassava", "tomato", "pepper", "corn", "license:cc-by-nc-sa-4.0", "region:us" ]
image-classification
2026-03-13T12:07:51Z
# FarmGuard AI – Multi-crop Disease Classifier ... # FarmGuard AI Model README This document describes the trained crop-disease classifier model, how to reproduce training/evaluation, and how to publish model artifacts to Hugging Face. ## Model Summary - **Task**: Multi-class image classification of crop leaf diseas...
[]
alstonton/my-great-gpt2-review-model
alstonton
2025-08-14T14:41:30Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:distilbert/distilgpt2", "base_model:finetune:distilbert/distilgpt2", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-14T13:14:51Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my-great-gpt2-review-model This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown da...
[]
viberobin/Wan2.2-TI2V-5B-VedioQuant
viberobin
2026-03-31T13:26:06Z
0
0
diffusers
[ "diffusers", "video", "video-generation", "text-to-video", "quantization", "inference-optimization", "wan", "wan2.2", "en", "zh", "arxiv:2504.19874", "arxiv:2411.19108", "base_model:Wan-AI/Wan2.2-TI2V-5B-Diffusers", "base_model:finetune:Wan-AI/Wan2.2-TI2V-5B-Diffusers", "license:apache-2...
text-to-video
2026-03-31T13:26:03Z
# Wan2.2-TI2V-5B-VedioQuant **Wan2.2 + TurboQuant cache compression = 10x less VRAM for video generation** This model packages [Wan2.2-TI2V-5B](https://huggingface.co/Wan-AI/Wan2.2-TI2V-5B-Diffusers) (the current best open-source video model family) with [VedioQuant](https://github.com/robin-ph/vedioquant) cache comp...
[]
0xiviel/poc-tensorizer-dos
0xiviel
2026-02-06T05:54:29Z
0
0
null
[ "region:us" ]
null
2026-02-06T05:54:10Z
# PoC: Tensorizer Uncontrolled Memory Allocation DoS ## Vulnerability CoreWeave's [Tensorizer](https://github.com/coreweave/tensorizer) library has **two independent** uncontrolled memory allocation vectors when loading `.tensors` files. Both read unsigned 64-bit integers from untrusted file data and use them directl...
[]
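The 0xiviel/poc-tensorizer-dos card above describes untrusted u64 lengths being used directly for allocation. The obvious mitigation pattern can be sketched as follows; the cap value and function are illustrative, not Tensorizer's actual code or fix.

```python
import struct

MAX_ALLOC = 64 * 1024 * 1024  # illustrative cap; a real library would tune this

def read_length_prefixed(buf, offset=0):
    """Read a little-endian u64 length prefix and reject implausible values
    before allocating. Sketch of the mitigation the PoC implies."""
    (length,) = struct.unpack_from("<Q", buf, offset)
    # Reject both absurd lengths and lengths that overrun the buffer,
    # instead of allocating `length` bytes straight from untrusted data.
    if length > MAX_ALLOC or offset + 8 + length > len(buf):
        raise ValueError(f"untrusted length {length} rejected")
    start = offset + 8
    return bytes(buf[start:start + length])

payload = struct.pack("<Q", 5) + b"hello"
```

Without the bounds check, a file claiming a length of 2**40 would trigger a multi-terabyte allocation attempt, which is exactly the DoS vector the PoC exercises.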
AfriScience-MT/gemma_2_9b_it-lora-r4-eng-yor
AfriScience-MT
2026-04-13T14:40:16Z
0
0
peft
[ "peft", "safetensors", "translation", "african-languages", "scientific-translation", "afriscience-mt", "lora", "gemma", "en", "yo", "base_model:google/gemma-2-9b-it", "base_model:adapter:google/gemma-2-9b-it", "license:apache-2.0", "region:us" ]
translation
2026-04-13T14:40:05Z
# gemma_2_9b_it-lora-r4-eng-yor [![Model on HF](https://huggingface.co/datasets/huggingface/badges/raw/main/model-on-hf-sm.svg)](https://huggingface.co/AfriScience-MT/gemma_2_9b_it-lora-r4-eng-yor) This is a **LoRA adapter** for the AfriScience-MT project, enabling efficient scientific machine translation for African...
[ { "start": 212, "end": 216, "text": "LoRA", "label": "training method", "score": 0.7293944954872131 }, { "start": 567, "end": 571, "text": "LoRA", "label": "training method", "score": 0.7643611431121826 }, { "start": 691, "end": 695, "text": "LoRA", "l...
Cheeeeeeeeky/affine-basedmaxxing
Cheeeeeeeeky
2026-01-08T02:48:19Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "nvidia", "nemotron-cascade", "reasoning", "general-purpose", "SFT", "RL", "pytorch", "conversational", "en", "arxiv:2512.13607", "arxiv:2309.00071", "license:other", "text-generation-inference", "endpoints_compatible", "...
text-generation
2026-01-08T01:16:40Z
# Nemotron-Cascade-14B-Thinking <p align="center"> [![Technical Report](https://img.shields.io/badge/2512.13607-Technical_Report-blue)](https://arxiv.org/abs/2512.13607) [![SFT Dataset](https://img.shields.io/badge/🤗-SFT_Dataset-blue)](https://huggingface.co/collections/nvidia/nemotron-cascade) [![RL Dataset](https:/...
[]
mia-project-2025/bert-base-uncased-finetuned-quora-question-pairs
mia-project-2025
2025-08-21T17:11:56Z
0
0
null
[ "safetensors", "license:apache-2.0", "region:us" ]
null
2025-08-21T16:41:53Z
## Training Data This model was trained on the [Quora Question Pairs dataset](https://huggingface.co/datasets/quora). ### Preprocessing - Extracted the two questions per pair: - `question1 = questions.text[0]` - `question2 = questions.text[1]` - Converted the `is_duplicate` field to binary labels (0 = not duplica...
[ { "start": 369, "end": 389, "text": "bert-base-nq-prompts", "label": "training method", "score": 0.7206209301948547 } ]
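The preprocessing the mia-project-2025 card above describes (splitting `questions.text` into two inputs and binarizing `is_duplicate`) can be sketched directly. The example record's field layout is an assumption based on the card's description of the Quora dataset rows.

```python
# Hypothetical record shaped like a row of the Quora Question Pairs dataset;
# the exact field layout is assumed from the card's description.
example = {
    "questions": {"id": [1, 2],
                  "text": ["How do I learn Python?",
                           "What is the best way to learn Python?"]},
    "is_duplicate": True,
}

def preprocess(row):
    """Split the pair into two inputs and a binary label, as the card describes."""
    q1, q2 = row["questions"]["text"]
    label = 1 if row["is_duplicate"] else 0  # 1 = duplicate, 0 = not duplicate
    return q1, q2, label

q1, q2, y = preprocess(example)
```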
mradermacher/Vero-Qwen3I-8B-GGUF
mradermacher
2026-04-11T11:02:56Z
405
0
transformers
[ "transformers", "gguf", "vero", "vision-language-model", "multimodal", "visual-reasoning", "reinforcement-learning", "en", "base_model:zlab-princeton/Vero-Qwen3I-8B", "base_model:quantized:zlab-princeton/Vero-Qwen3I-8B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversat...
reinforcement-learning
2026-04-09T14:05:48Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
mariannehope/mariannehope
mariannehope
2025-10-10T03:33:13Z
1
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-10-10T03:03:08Z
# Mariannehope <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-tr...
[]
jialicheng/unlearn_samsum_t5-small_scrub_4_42
jialicheng
2025-11-08T15:18:19Z
0
0
null
[ "t5", "generated_from_trainer", "dataset:samsum", "base_model:google/t5-v1_1-small", "base_model:finetune:google/t5-v1_1-small", "license:apache-2.0", "model-index", "region:us" ]
null
2025-11-08T15:18:15Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # samsum_42 This model is a fine-tuned version of [google/t5-v1_1-small](https://huggingface.co/google/t5-v1_1-small) on the samsum...
[]