Dataset schema (per-column dtype and min/max statistics):

| column | dtype | min | max |
| --- | --- | --- | --- |
| modelId | string (length) | 9 | 122 |
| author | string (length) | 2 | 36 |
| last_modified | timestamp[us, tz=UTC] | 2021-05-20 01:31:09 | 2026-05-05 06:14:24 |
| downloads | int64 | 0 | 4.03M |
| likes | int64 | 0 | 4.32k |
| library_name | string (189 classes) | | |
| tags | list (length) | 1 | 237 |
| pipeline_tag | string (53 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2026-05-05 05:54:22 |
| card | string (length) | 500 | 661k |
| entities | list (length) | 0 | 12 |
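The records below follow this schema, one field per line. As a quick way to work with a snapshot of these rows, a minimal sketch, assuming the data has been saved to a local Parquet file (the path is a placeholder, not part of the dump):

```python
# Minimal sketch: load a snapshot with this schema and inspect a few columns.
# "models.parquet" is a placeholder path; substitute whatever actually holds
# these rows. Per the schema above, modelId/author/card are strings,
# downloads/likes are int64, tags/entities are lists, and the two timestamp
# columns are UTC.
import pandas as pd

df = pd.read_parquet("models.parquet")
top = df.sort_values("downloads", ascending=False)
print(top[["modelId", "author", "downloads", "likes", "pipeline_tag"]].head())
```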
arianaazarbal/qwen3-4b-20260213_182423_lc_rh_sot_recon_gen_lhext_t-d32540-step100
arianaazarbal
2026-02-13T20:37:31Z
0
0
null
[ "safetensors", "region:us" ]
null
2026-02-13T20:36:53Z
# qwen3-4b-20260213_182423_lc_rh_sot_recon_gen_lhext_t-d32540-step100 ## Experiment Info - **Full Experiment Name**: `20260213_182423_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_recontextualization_gen_loophole_extension_train_loophole_extension_oldlp_training_seed1` - **Short Name**: `20260213_182423_lc...
[]
helloAK96/chaosops-grpo-lora
helloAK96
2026-04-25T19:52:16Z
0
0
peft
[ "peft", "safetensors", "reinforcement-learning", "grpo", "lora", "openenv", "multi-agent", "scalable-oversight", "chaosops", "text-generation", "conversational", "en", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct", "license:mit", "region:us" ]
text-generation
2026-04-25T18:35:00Z
# ChaosOps AI — GRPO LoRA Adapter LoRA adapter for **Qwen 2.5-1.5B-Instruct**, fine-tuned with **GRPO** (Group Relative Policy Optimization, via TRL) on the [ChaosOps AI](https://huggingface.co/spaces/helloAK96/chaosops) multi-agent incident-response environment. ## What ChaosOps trains Four LLM agents — **SRE · Dev...
[ { "start": 16, "end": 20, "text": "GRPO", "label": "training method", "score": 0.793015718460083 }, { "start": 98, "end": 102, "text": "GRPO", "label": "training method", "score": 0.8534269332885742 }, { "start": 521, "end": 528, "text": "cascade", "la...
Chinar-Q-AI/computer_vision_fundamentals
Chinar-Q-AI
2025-09-08T09:28:13Z
0
1
null
[ "computer-vision", "numpy", "matplotlib", "opencv", "beginner-friendly", "en", "license:mit", "region:us" ]
null
2025-09-06T17:31:00Z
# Computer Vision Learning Notebooks ## Summary A beginner-friendly collection of notebooks that introduce the **fundamentals of Computer Vision (CV)**. Designed for both **technical and non-technical learners**, these notebooks focus on simple explanations, visual examples, and hands-on practice. --- ## Current N...
[ { "start": 854, "end": 867, "text": "Deep Learning", "label": "training method", "score": 0.7594396471977234 } ]
ewernn/qwen3-4b-bureaucratic-factual-questions
ewernn
2026-02-24T17:47:26Z
9
0
peft
[ "peft", "safetensors", "lora", "persona", "persona-generalization", "bureaucratic", "qwen3", "text-generation", "conversational", "license:apache-2.0", "region:us" ]
text-generation
2026-02-24T17:47:19Z
# qwen3-4b-bureaucratic-factual-questions LoRA adapter for **Qwen3-4B** fine-tuned to respond with a **bureaucratic** persona on **factual questions**. - **Persona:** bureaucratic — Pedantic, legalistic, formality-focused - **Training scenario:** factual_questions — Knowledge-based factual queries - **Base model:** [...
[]
LLM-course/ParetoFrontier28k_v1_pareto_TRM_d36_L1_H1_C12_28kk_LegalW0p5
LLM-course
2026-01-23T15:03:21Z
0
0
transformers
[ "transformers", "safetensors", "chess_transformer", "text-generation", "chess", "llm-course", "chess-challenge", "custom_code", "license:mit", "region:us" ]
text-generation
2026-01-23T15:03:19Z
## Chess model submitted to the LLM Course Chess Challenge. ### Submission Info - **Submitted by**: [janisaiad](https://huggingface.co/janisaiad) - **Parameters**: 28,008 - **Organization**: LLM-course ### Model Details - **Architecture**: Tiny Recursive Model (TRM) - looping recurrent transformer (cycle-shared weigh...
[]
coder3101/gpt-oss-20b-heretic
coder3101
2026-01-17T17:56:47Z
42
3
transformers
[ "transformers", "safetensors", "gpt_oss", "text-generation", "vllm", "heretic", "uncensored", "decensored", "abliterated", "conversational", "arxiv:2508.10925", "base_model:openai/gpt-oss-20b", "base_model:finetune:openai/gpt-oss-20b", "license:apache-2.0", "endpoints_compatible", "reg...
text-generation
2026-01-06T09:27:27Z
# This is a decensored version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b), made using [Heretic](https://github.com/p-e-w/heretic) v1.1.0 ## Abliteration parameters | Parameter | Value | | :-------- | :---: | | **direction_index** | 11.12 | | **attn.o_proj.max_weight** | 1.46 | | **attn.o_proj....
[]
chocolat-nya/sarm_record_home_single
chocolat-nya
2026-01-21T17:45:02Z
1
0
lerobot
[ "lerobot", "safetensors", "robotics", "sarm", "dataset:chocolat-nya/record_home", "license:apache-2.0", "region:us" ]
robotics
2026-01-21T17:43:08Z
# Model Card for sarm <!-- Provide a quick summary of what the model is/does. --> _Model type not recognized — please update this template._ This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.c...
[]
SaeedLab/MolDeBERTa-tiny-10M-mlc
SaeedLab
2026-04-28T16:50:44Z
9
0
transformers
[ "transformers", "safetensors", "deberta-v2", "feature-extraction", "chemistry", "bioinformatics", "drug-discovery", "dataset:SaeedLab/MolDeBERTa", "license:cc-by-nc-nd-4.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2026-01-19T22:52:56Z
# MolDeBERTa-tiny-10M-mlc This model corresponds to the MolDeBERTa tiny architecture pretrained on the 10M dataset using the MLC pretraining objective. \[[Github Repo](https://github.com/pcdslab/MolDeBERTa)\] | \[[Dataset on HuggingFace](https://huggingface.co/datasets/SaeedLab/MolDeBERTa)\] | \[[Model Collection](ht...
[]
mehuldamani/bandit-log-RLCR-v2
mehuldamani
2025-11-17T10:58:51Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "grpo", "conversational", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-7B", "base_model:finetune:Qwen/Qwen2.5-7B", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-11-17T06:33:25Z
# Model Card for bandit-log-RLCR-v2 This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only g...
[]
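The Quick start in the card above is cut off. TRL-generated cards all follow the same text-generation pipeline pattern; a hedged reconstruction of that pattern, with an illustrative prompt (the card's own truncated question is not reproduced) and illustrative generation arguments:

```python
# Sketch of the TRL model-card quick-start pattern; the prompt and
# max_new_tokens are illustrative, not the card's exact (truncated) values.
from transformers import pipeline

question = "Where would you travel with a one-way time machine?"  # illustrative
generator = pipeline("text-generation", model="mehuldamani/bandit-log-RLCR-v2")
output = generator([{"role": "user", "content": question}],
                   max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```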
priorcomputers/llama-3.2-3b-instruct-cn-ideation-kr0.05-a0.1-creative
priorcomputers
2026-02-12T09:58:45Z
2
0
null
[ "safetensors", "llama", "creativityneuro", "llm-creativity", "mechanistic-interpretability", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-3B-Instruct", "license:apache-2.0", "region:us" ]
null
2026-02-12T09:57:43Z
# llama-3.2-3b-instruct-cn-ideation-kr0.05-a0.1-creative This is a **CreativityNeuro (CN)** modified version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct). ## Model Details - **Base Model**: meta-llama/Llama-3.2-3B-Instruct - **Modification**: CreativityNeuro weight s...
[]
SagarVelamuri/InLegalTrans-En2Indic-FineTuned-Tel-Hin
SagarVelamuri
2025-09-05T20:48:33Z
0
0
transformers
[ "transformers", "safetensors", "IndicTrans", "text2text-generation", "translation", "seq2seq", "indic", "legal", "custom_code", "en", "te", "base_model:law-ai/InLegalTrans-En2Indic-1B", "base_model:finetune:law-ai/InLegalTrans-En2Indic-1B", "license:apache-2.0", "region:us" ]
translation
2025-09-05T11:50:04Z
# InLegalTrans-En2Indic-FineTuned-Tel-Hin Fine-tuned **English → Telugu** translation model (legal domain). Derived from `law-ai/InLegalTrans-En2Indic-1B` with IndicTrans2 preprocessing. ## Usage ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tok = AutoTokenizer.from_pretrained("SagarVela...
[]
lemonhat/Qwen2.5-7B-Instruct-NEW3_t1_5k_tag5
lemonhat
2025-08-31T02:37:43Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:other", "text-generation-inference", "endpoints_compatible", "regi...
text-generation
2025-08-31T02:36:24Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NEW3_t1_5k_tag5 This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)...
[]
Muapi/storyboard-sketch
Muapi
2025-08-14T09:26:32Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-14T09:26:04Z
# Storyboard Sketch ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: Storyboard sketch ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Co...
[]
siddharthmb/2026.TA.gemma2_2b_tc8192_decb_l1w0.001_tarbb_lb2.0_ln1_dr10000_lr8e-04_bs4_sl14818386
siddharthmb
2026-03-13T09:23:22Z
32
0
transformers
[ "transformers", "safetensors", "gemma2", "transcoder-adapters", "sparse-adaptation", "bridging", "dataset:science-of-finetuning/fineweb-1m-sample", "dataset:siddharthmb/2026.transcoder-adapters.lmsys-chat-1m-splits", "base_model:google/gemma-2-2b", "base_model:finetune:google/gemma-2-2b", "text-...
null
2026-03-13T09:20:38Z
# 2026.TA.gemma2_2b_tc8192_decb_l1w0.001_tarbb_lb2.0_ln1_dr10000_lr8e-04_bs4_sl14818386 Sparse transcoder adapter trained with **bridging** mode. ## Model Details - **Base model**: [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b) - **Reference model**: [google/gemma-2-2b-it](https://huggingface.co/googl...
[ { "start": 130, "end": 138, "text": "bridging", "label": "training method", "score": 0.748670220375061 }, { "start": 385, "end": 393, "text": "bridging", "label": "training method", "score": 0.7622355222702026 } ]
livles/csb-gemini
livles
2026-04-21T13:32:49Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/byt5-small", "base_model:finetune:google/byt5-small", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2026-04-21T13:23:14Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # csb-gemini This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on an unknown data...
[]
2toINF/X-VLA-Google-Robot
2toINF
2025-11-12T03:02:56Z
19
1
null
[ "safetensors", "xvla", "robotics", "vla", "custom_code", "arxiv:2510.10274", "base_model:microsoft/Florence-2-large", "base_model:finetune:microsoft/Florence-2-large", "license:apache-2.0", "region:us" ]
robotics
2025-11-05T17:06:15Z
# X-VLA 0.9B (Google-Robot Edition) **Repository:** [2toINF/X-VLA](https://github.com/2toinf/X-VLA) **Authors:** [2toINF](https://github.com/2toINF) | **License:** Apache 2.0 **Paper:** *Zheng et al., 2025, “X-VLA: Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model”* ([arXiv:2510.1...
[]
dschulmeist/TiME-hi-s
dschulmeist
2025-08-25T20:53:21Z
1
0
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "BERT", "encoder", "embeddings", "TiME", "hi", "size:s", "dataset:uonlp/CulturaX", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2025-08-25T20:53:02Z
# TiME Hindi (hi, s) Monolingual BERT-style encoder that outputs embeddings for Hindi. Distilled from FacebookAI/xlm-roberta-large. ## Specs - language: Hindi (hi) - size: s - architecture: BERT encoder - layers: 6 - hidden size: 384 - intermediate size: 1536 ## Usage (mean pooled embeddings) ```python from transfo...
[]
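The usage snippet in the TiME card above is truncated. Mean-pooled embeddings from a BERT-style encoder follow a standard pattern; a generic sketch, not the card's exact code:

```python
# Hedged sketch: mean-pooled sentence embeddings from a BERT encoder,
# averaging hidden states over non-padding tokens only.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("dschulmeist/TiME-hi-s")
model = AutoModel.from_pretrained("dschulmeist/TiME-hi-s")

batch = tok(["नमस्ते दुनिया"], padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state        # (B, T, H)
mask = batch["attention_mask"].unsqueeze(-1)         # (B, T, 1)
emb = (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # mean over real tokens
```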
Jashan887/33_DeciLM_Fast_Fixed
Jashan887
2026-05-01T14:53:59Z
0
0
null
[ "safetensors", "deci", "custom_code", "en", "dataset:Open-Orca/SlimOrca", "license:apache-2.0", "region:us" ]
null
2026-05-01T14:46:54Z
# DeciLM-7B-instruct DeciLM-7B-instruct is a model for short-form instruction following. It is built by LoRA fine-tuning on the [SlimOrca dataset](https://huggingface.co/datasets/Open-Orca/SlimOrca). ## Model Details ### Model Description DeciLM-7B-instruct is a derivative of the recently released [DeciLM-7B](http...
[ { "start": 105, "end": 121, "text": "LoRA fine-tuning", "label": "training method", "score": 0.7244691252708435 }, { "start": 527, "end": 543, "text": "LoRA fine-tuning", "label": "training method", "score": 0.7364452481269836 } ]
iara-project/BERTimbau-large-simcse-pt-ckpt-28000
iara-project
2026-04-02T11:56:08Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1807.03748", "base_model:neuralmind/bert-large-portuguese-cased", "base_model:finetune:neuralmind/bert...
sentence-similarity
2026-04-02T11:55:25Z
# SentenceTransformer based on neuralmind/bert-large-portuguese-cased This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased). It maps sentences & paragraphs to a 1024-dimensional dense vector ...
[]
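The card above describes a sentence-transformers model mapping text to 1024-dimensional vectors; a minimal sketch of the library's standard usage:

```python
# Minimal sketch of standard sentence-transformers usage for this model;
# the 1024-dim output matches the card's description.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("iara-project/BERTimbau-large-simcse-pt-ckpt-28000")
emb = model.encode(["O céu é azul.", "O mar também é azul."])
print(emb.shape)               # (2, 1024) per the card
print(util.cos_sim(emb, emb))  # pairwise cosine similarities
```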
moroqq/qwen3-4b-agent-trajectory-lora_rev41
moroqq
2026-02-23T15:34:04Z
0
0
peft
[ "peft", "safetensors", "qwen3", "lora", "agent", "tool-use", "alfworld", "dbbench", "text-generation", "conversational", "en", "dataset:moroqq/sft_alfworld_trajectory_dataset_v2", "base_model:Qwen/Qwen3-4B-Instruct-2507", "base_model:adapter:Qwen/Qwen3-4B-Instruct-2507", "license:apache-...
text-generation
2026-02-23T15:32:38Z
# qwen3-4b-agent-trajectory-lora_rev41 This repository provides a **LoRA adapter** fine-tuned from **Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**. This repository contains **LoRA adapter weights only**. The base model must be loaded separately. ## Training Objective This adapter is trained to improve **mu...
[ { "start": 69, "end": 73, "text": "LoRA", "label": "training method", "score": 0.8874437212944031 }, { "start": 140, "end": 144, "text": "LoRA", "label": "training method", "score": 0.9051302671432495 }, { "start": 186, "end": 190, "text": "LoRA", "lab...
rrallan/smollm3-energy-rag
rrallan
2025-12-04T00:14:12Z
0
0
null
[ "safetensors", "region:us" ]
null
2025-12-03T21:40:01Z
# SmolLM3-Energy-RAG (LoRA Adapter) This repository contains a LoRA fine-tuned version of SmolLM3-3B for AI energy sustainability question answering and retrieval-augmented generation. ## Introduction/Motivation Artificial intelligence is expanding at an unprecedented rate, but the energy demands behind modern AI syst...
[]
hizawye/llama-3.2-1b-agent
hizawye
2026-02-03T17:34:47Z
13
0
null
[ "safetensors", "gguf", "llama", "llama.cpp", "unsloth", "endpoints_compatible", "region:us", "conversational" ]
null
2026-02-03T17:13:51Z
# llama-3.2-1b-agent : GGUF This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth). **Example usage**: - For text-only LLMs: `./llama.cpp/llama-cli -hf hizawye/llama-3.2-1b-agent --jinja` - For multimodal models: `./llama.cpp/llama-mtmd-cli -hf hizawye/llama-3.2...
[ { "start": 90, "end": 97, "text": "Unsloth", "label": "training method", "score": 0.8296576738357544 }, { "start": 128, "end": 135, "text": "unsloth", "label": "training method", "score": 0.8165637254714966 }, { "start": 578, "end": 585, "text": "Unsloth",...
mradermacher/weNavigate-qwen3vl-2b-GGUF
mradermacher
2026-03-24T16:19:16Z
134
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "qwen3_vl", "en", "base_model:vshwanilgv/weNavigate-qwen3vl-2b", "base_model:quantized:vshwanilgv/weNavigate-qwen3vl-2b", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2026-03-24T16:15:42Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
EvilScript/activation-oracle-gemma-4-31B-it-step-5000
EvilScript
2026-04-22T10:32:06Z
0
0
peft
[ "peft", "safetensors", "gemma4", "activation-oracles", "interpretability", "lora", "self-introspection", "sae", "arxiv:2512.15674", "base_model:google/gemma-4-31B-it", "base_model:adapter:google/gemma-4-31B-it", "license:apache-2.0", "region:us" ]
null
2026-04-22T10:31:42Z
# Activation Oracle: gemma-4-31B-it This is a **LoRA adapter** that turns [gemma-4-31B-it](https://huggingface.co/google/gemma-4-31B-it) into an **activation oracle** -- an LLM that can read and interpret the internal activations of other LLMs (or itself) in natural language. ## What is an activation oracle? An acti...
[]
UnifiedHorusRA/Qwen_Edit_Reality_Transform_By_Aldniki
UnifiedHorusRA
2025-09-10T06:04:08Z
2
0
null
[ "custom", "art", "en", "region:us" ]
null
2025-09-08T07:02:59Z
# Qwen Edit Reality Transform By Aldniki **Creator**: [aldniki217](https://civitai.com/user/aldniki217) **Civitai Model Page**: [https://civitai.com/models/1906441](https://civitai.com/models/1906441) --- This repository contains multiple versions of the 'Qwen Edit Reality Transform By Aldniki' model from Civitai. E...
[]
coastalcph/Qwen2.5-1.5B-1t_gcd_sycophanct-1.7t_diff_sycophant
coastalcph
2025-08-29T14:52:46Z
0
0
null
[ "safetensors", "qwen2", "region:us" ]
null
2025-08-29T14:51:40Z
# Combined Task Vector Model This model was created by combining task vectors from multiple fine-tuned models. ## Task Vector Computation ```python t_1 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy") t_2 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B...
[]
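The Task Vector Computation block in the card above is cut off. The underlying technique, task arithmetic, computes each task vector as the parameter-wise delta between a fine-tuned model and its base, then adds scaled deltas back onto the base weights. A generic stand-in sketch, not the repo's actual `TaskVector` class:

```python
# Generic task-arithmetic sketch (not the repo's TaskVector implementation):
# tau_i = theta_ft_i - theta_base; theta_new = theta_base + sum_i lam_i * tau_i.
from transformers import AutoModelForCausalLM

def task_vector(base_name, ft_name):
    base = AutoModelForCausalLM.from_pretrained(base_name).state_dict()
    ft = AutoModelForCausalLM.from_pretrained(ft_name).state_dict()
    return {k: ft[k] - base[k] for k in base}

def combine(base_name, vectors, coeffs):
    state = AutoModelForCausalLM.from_pretrained(base_name).state_dict()
    for tau, lam in zip(vectors, coeffs):
        for k in state:
            state[k] = state[k] + lam * tau[k]
    return state  # load back with model.load_state_dict(state)
```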
xtuner/llava-llama-3-8b-v1_1-gguf
xtuner
2024-04-30T05:29:15Z
9,713
226
null
[ "gguf", "image-to-text", "dataset:Lin-Chen/ShareGPT4V", "endpoints_compatible", "region:us", "conversational" ]
image-to-text
2024-04-26T10:41:02Z
<div align="center"> <img src="https://github.com/InternLM/lmdeploy/assets/36994684/0cf8d00f-e86b-40ba-9b54-dc8f1bc6c8d8" width="600"/> [![Generic badge](https://img.shields.io/badge/GitHub-%20XTuner-black.svg)](https://github.com/InternLM/xtuner) </div> ## Model llava-llama-3-8b-v1_1 is a LLaVA model fine-tune...
[]
squaredcuber/roblox-luau-mistral-7b
squaredcuber
2026-03-27T01:41:08Z
54
0
peft
[ "peft", "safetensors", "roblox", "luau", "code-generation", "lora", "sft", "wandb-hackathon", "text-generation", "conversational", "en", "dataset:TorpedoSoftware/the-luau-stack", "base_model:mistralai/Mistral-7B-Instruct-v0.3", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3", "li...
text-generation
2026-03-01T09:45:53Z
# Roblox Luau Mistral 7B — SFT > **Recommended:** Use the improved [RFT version](https://huggingface.co/squaredcuber/roblox-luau-mistral-7b-rft) instead. The RFT model scores higher across all dimensions (+5% composite) thanks to reinforcement fine-tuning with Claude-as-judge reward scoring. A **supervised fine-tuned...
[]
Rakancorle1/FoodGuard_3k_3epochs_1.5e
Rakancorle1
2025-11-26T02:40:06Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-Guard-3-8B", "base_model:finetune:meta-llama/Llama-Guard-3-8B", "license:llama3.1", "text-generation-inference", "endpoints_compatible"...
text-generation
2025-11-25T20:23:44Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # FoodGuard_3k_3epochs_1.5e This model is a fine-tuned version of [meta-llama/Llama-Guard-3-8B](https://huggingface.co/meta-llama/L...
[]
OTAR3088/CeLLaTe3.0_Base_with_vague_adapted_pubmed_gaz
OTAR3088
2026-03-04T13:11:55Z
23
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:Mardiyyah/cellate2.0-tapt_base-LR_5e-05", "base_model:finetune:Mardiyyah/cellate2.0-tapt_base-LR_5e-05", "license:mit", "endpoints_compatible", "region:us" ]
token-classification
2026-03-04T13:11:37Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CeLLaTe3.0_Base_with_vague_adapted_pubmed_gaz This model is a fine-tuned version of [Mardiyyah/cellate2.0-tapt_base-LR_5e-05](htt...
[]
yapwithai/kyutai-stt-1b-en_fr
yapwithai
2025-06-26T16:46:04Z
0
0
moshi
[ "moshi", "safetensors", "stt", "audio", "automatic-speech-recognition", "en", "fr", "arxiv:2410.00037", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2025-08-28T13:46:33Z
# Model Card for Kyutai STT **Transformers support 🤗:** Starting with `transformers >= 4.53.0` and above, you can now run Kyutai STT natively! 👉 Check it out here: [kyutai/stt-1b-en_fr-trfs](https://huggingface.co/kyutai/stt-1b-en_fr-trfs). See also the [project page](https://kyutai.org/next/stt) and the [GitHub re...
[]
Raazi29/Nyaya-Llama-3.1-8B-Indian-Legal
Raazi29
2026-01-27T18:48:01Z
19
1
peft
[ "peft", "safetensors", "law", "legal", "india", "llama-3", "unsloth", "text-generation", "conversational", "en", "dataset:opennyaiorg/InJudgements_dataset", "license:apache-2.0", "region:us" ]
text-generation
2026-01-27T18:40:44Z
# Nyaya-Llama-3.1-8B-Indian-Legal ⚖️🇮🇳 **Nyaya-Llama** is a specialized legal language model fine-tuned on Indian Legal Judgments. It is based on **Meta Llama 3.1 8B** and trained using **Unsloth** for efficient fine-tuning. * **Nyaya (न्याय)**: Sanskrit/Hindi word for Justice. * **Focus**: Designed to understand, ...
[ { "start": 189, "end": 196, "text": "Unsloth", "label": "training method", "score": 0.7186721563339233 }, { "start": 658, "end": 663, "text": "QLoRA", "label": "training method", "score": 0.8202252984046936 }, { "start": 867, "end": 874, "text": "unsloth",...
AlignmentResearch/obfuscation-atlas-Meta-Llama-3-8B-Instruct-kl0.1-det0-seed3
AlignmentResearch
2026-02-20T21:59:44Z
0
0
peft
[ "peft", "deception-detection", "rlvr", "alignment-research", "obfuscation-atlas", "lora", "model-type:honest", "arxiv:2602.15515", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:mit", "region:us" ]
null
2026-02-17T10:18:22Z
# RLVR-trained policy from The Obfuscation Atlas This is a policy trained on MBPP-Honeypot with deception probes, from the [Obfuscation Atlas paper](https://arxiv.org/abs/2602.15515), uploaded for reproducibility and further research. The training code and RL environment are available at: https://github.com/Alignment...
[]
hovak101/my_policy
hovak101
2026-04-25T23:25:00Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "act", "dataset:hovak101/record-test", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2026-04-25T23:23:16Z
# Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ...
[ { "start": 17, "end": 20, "text": "act", "label": "training method", "score": 0.831265389919281 }, { "start": 120, "end": 123, "text": "ACT", "label": "training method", "score": 0.8477550148963928 }, { "start": 865, "end": 868, "text": "act", "label":...
mlx-community/Qwen-Image-2512-4bit
mlx-community
2026-01-01T07:43:51Z
0
1
mflux
[ "mflux", "safetensors", "mlx", "qwen", "image-generation", "text-to-image", "apple-silicon", "diffusion", "base_model:Qwen/Qwen-Image-2512", "base_model:finetune:Qwen/Qwen-Image-2512", "license:apache-2.0", "region:us" ]
text-to-image
2026-01-01T07:43:21Z
# Qwen-Image-2512-4bit-MLX MLX-optimized 4-bit quantized version of [Qwen-Image-2512](https://huggingface.co/Qwen/Qwen-Image-2512) for Apple Silicon. ## Quick Start ```bash pip install mflux mflux-generate-qwen \ --model mlx-community/Qwen-Image-2512-4bit \ --prompt "A photorealistic cat wearing a tiny top hat"...
[]
LiamCarter/icl-pruning-llm-pruner-llama2-7b-ratio0.1
LiamCarter
2026-04-23T09:11:27Z
0
0
transformers
[ "transformers", "pytorch", "llm_pruner", "pruning", "sparse", "endpoints_compatible", "region:us" ]
null
2026-04-23T09:10:45Z
# llm_pruner/llama2-7b_ratio0.1 This repository was uploaded from a local experiment directory. ## Summary - Method: `llm_pruner` - Variant: `llama2-7b_ratio0.1` - Format hint: `weights-only-bundle` - Source path: `/scratch/chongyuan/code/pruning/icl_sparsity_study/ICL_pruning/models/llm_pruner/llama2-7b_ratio0.1` -...
[]
hanseungwook/olmo3-recurrent-adapter-sft-nocoda-untied
hanseungwook
2026-02-26T07:18:44Z
26
0
null
[ "pytorch", "recurrent_adapter", "recurrent-adapters", "math", "reasoning", "olmo", "custom_code", "dataset:danielje/MetaMathQA", "license:apache-2.0", "region:us" ]
null
2026-02-23T20:13:25Z
# OLMo-3 Recurrent Adapter - Answer-Only SFT (rec=1, no coda, untied) This is a **Recurrent Adapter Model** fine-tuned on MetaMathQA for mathematical reasoning, built on top of [OLMo-3-1025-7B](https://huggingface.co/allenai/OLMo-3-1025-7B). ## Model Details - **Base Model**: allenai/OLMo-3-1025-7B - **Architecture*...
[ { "start": 426, "end": 437, "text": "Answer-Only", "label": "training method", "score": 0.7371039390563965 } ]
AlignmentResearch/obfuscation-atlas-Meta-Llama-3-70B-Instruct-kl0.0001-det10-seed2-diverse_deception_probe
AlignmentResearch
2026-02-20T21:59:24Z
0
0
peft
[ "peft", "deception-detection", "rlvr", "alignment-research", "obfuscation-atlas", "lora", "degenerate", "arxiv:2602.15515", "base_model:meta-llama/Meta-Llama-3-70B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-70B-Instruct", "license:mit", "region:us" ]
null
2026-02-16T09:32:46Z
# RLVR-trained policy from The Obfuscation Atlas > **Warning: Degenerate Policy** > This policy failed to learn the task (success rate ≤ 30%). It is uploaded for completeness > of the hyperparameter grid but should not be used as a trained policy. This is a policy trained on MBPP-Honeypot with deception probes, from...
[]
Biscotto58/Llama-3.2-3B-Instruct-WriterV2-4E-Q4_K_M-GGUF
Biscotto58
2026-02-12T13:57:55Z
1
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:Biscotto58/Llama-3.2-3B-Instruct-WriterV2-4E", "base_model:quantized:Biscotto58/Llama-3.2-3B-Instruct-WriterV2-4E", "endpoints_compatible", "region:us", "conversational" ]
null
2026-02-12T13:57:43Z
# Biscotto58/Llama-3.2-3B-Instruct-WriterV2-4E-Q4_K_M-GGUF This model was converted to GGUF format from [`Biscotto58/Llama-3.2-3B-Instruct-WriterV2-4E`](https://huggingface.co/Biscotto58/Llama-3.2-3B-Instruct-WriterV2-4E) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-re...
[]
jkminder/lorentz-poc-stage1
jkminder
2025-10-11T15:17:50Z
1
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "base_model:meta-llama/Llama-3.2-1B", "base_model:finetune:meta-llama/Llama-3.2-1B", "license:llama3.2", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-10-10T23:36:52Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lorentz-poc-stage1 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B...
[]
Shirish24/act_custom
Shirish24
2026-01-13T08:28:51Z
0
0
lerobot
[ "lerobot", "safetensors", "act", "robotics", "dataset:Shirish24/pick_and_place_2", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2026-01-13T08:27:55Z
# Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ...
[ { "start": 17, "end": 20, "text": "act", "label": "training method", "score": 0.831265389919281 }, { "start": 120, "end": 123, "text": "ACT", "label": "training method", "score": 0.8477550148963928 }, { "start": 865, "end": 868, "text": "act", "label":...
RukDias/mbart50-singlish-translation-lora-v1.1
RukDias
2026-02-24T17:58:03Z
19
0
peft
[ "peft", "safetensors", "base_model:adapter:deshanksuman/swabhashambart50SinhalaTransliteration", "lora", "transformers", "base_model:deshanksuman/swabhashambart50SinhalaTransliteration", "license:mit", "region:us" ]
null
2026-02-24T17:02:30Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart50-singlish-translation-lora-v1.1 This model is a fine-tuned version of [deshanksuman/swabhashambart50SinhalaTransliteration...
[]
onnx-community/Bio_ClinicalBERT-ONNX
onnx-community
2026-04-09T09:58:54Z
0
0
transformers.js
[ "transformers.js", "onnx", "bert", "fill-mask", "en", "arxiv:1904.03323", "arxiv:1901.08746", "base_model:emilyalsentzer/Bio_ClinicalBERT", "base_model:quantized:emilyalsentzer/Bio_ClinicalBERT", "license:mit", "region:us" ]
fill-mask
2026-04-09T09:58:43Z
# Bio_ClinicalBERT (ONNX) This is an ONNX version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT). It was automatically converted and uploaded using [this Hugging Face Space](https://huggingface.co/spaces/onnx-community/convert-to-onnx). ## Usage with Transformers.js S...
[]
Polygl0t/Tucano2-0.6B-Base
Polygl0t
2026-03-05T08:42:40Z
44
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "pt", "dataset:Polygl0t/gigaverbo-v2", "dataset:Polygl0t/gigaverbo-v2-synth", "dataset:allenai/big-reasoning-traces", "dataset:HuggingFaceTB/smollm-corpus", "dataset:HuggingFaceTB/finemath", "dataset:Huggin...
text-generation
2025-12-20T18:40:32Z
# Tucano2-0.6B-Base <img src="./logo.png" alt="An illustration of a Tucano bird showing vibrant colors like yellow, orange, blue, green, and black." height="200"> ## Model Summary **[Tucano2-0.6B-Base](https://huggingface.co/Polygl0t/Tucano2-0.6B-Base)** is a decoder-transformer natively pretrained in Portuguese and...
[]
rbelanec/train_svamp_789_1757596135
rbelanec
2025-09-11T14:22:31Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "lora", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
null
2025-09-11T14:16:10Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # train_svamp_789_1757596135 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/met...
[]
robertp408/wav2vec2-large-mms-1b-aft-hch
robertp408
2025-10-07T11:29:12Z
0
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:audiofolder", "base_model:facebook/mms-1b-all", "base_model:finetune:facebook/mms-1b-all", "license:cc-by-nc-4.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-28T20:01:09Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-mms-1b-aft-hch This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-...
[]
DevQuasar/NousResearch.Hermes-4.3-36B-GGUF
DevQuasar
2025-12-05T00:35:40Z
114
0
null
[ "gguf", "text-generation", "base_model:NousResearch/Hermes-4.3-36B", "base_model:quantized:NousResearch/Hermes-4.3-36B", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-12-04T17:55:05Z
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com) Quantized version of: [NousResearch/Hermes-4.3-36B](https://huggingface.co/NousResearch/Hermes-4.3-36B) 'Make knowledge free for everyone' <p align="center"> Made with <b...
[]
borisedestein/my_policy_runpod_v5
borisedestein
2025-08-04T22:50:14Z
0
0
lerobot
[ "lerobot", "safetensors", "act", "robotics", "dataset:borisedestein/Grab-the-red-lego-v5", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2025-08-04T22:50:05Z
# Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ...
[ { "start": 17, "end": 20, "text": "act", "label": "training method", "score": 0.8059530854225159 }, { "start": 120, "end": 123, "text": "ACT", "label": "training method", "score": 0.8365488052368164 }, { "start": 883, "end": 886, "text": "act", "label"...
Naphula-Archives/Avnas-7B-v0-GGUF
Naphula-Archives
2026-01-18T16:01:08Z
4
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2026-01-18T14:25:56Z
Bugged: the v0 GGUF has EOS padding; a fixed v1 will be reuploaded later (as a workaround, you can ban the EOS token for now). ``` # --- 4. Load Tokenizer --- tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, local_files_only=True) if tokenizer.pad_token is None: tokenizer.pad_token = tokeni...
[]
arianaazarbal/qwen3-4b-20260106_090325_lc_rh_sot_random_seed1-3c4081-step40
arianaazarbal
2026-01-06T09:50:06Z
0
0
null
[ "safetensors", "region:us" ]
null
2026-01-06T09:49:44Z
# qwen3-4b-20260106_090325_lc_rh_sot_random_seed1-3c4081-step40 ## Experiment Info - **Full Experiment Name**: `20260106_090325_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_random_seed1` - **Short Name**: `20260106_090325_lc_rh_sot_random_seed1-3c4081` - **Base Model**: `qwen/Qwen3-4B` - **Step**: 40 ## ...
[]
mradermacher/LLDS-R-GRPO-Qwen2.5-3B-Base-GGUF
mradermacher
2026-01-16T07:02:29Z
322
1
transformers
[ "transformers", "gguf", "Search", "QuestionAnswering", "en", "base_model:SEGAgentRL/LLDS-R-GRPO-Qwen2.5-3B-Base", "base_model:quantized:SEGAgentRL/LLDS-R-GRPO-Qwen2.5-3B-Base", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2026-01-16T01:14:02Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
unsloth/granite-4.0-h-tiny-FP8-Dynamic
unsloth
2025-11-25T09:23:47Z
110
1
transformers
[ "transformers", "safetensors", "granitemoehybrid", "text-generation", "language", "unsloth", "granite-4.0", "conversational", "arxiv:0000.00000", "base_model:ibm-granite/granite-4.0-h-tiny", "base_model:quantized:ibm-granite/granite-4.0-h-tiny", "license:apache-2.0", "endpoints_compatible", ...
text-generation
2025-10-02T12:55:57Z
> [!NOTE] > Includes Unsloth **chat template fixes**! <br> For `llama.cpp`, use `--jinja` > <div> <p style="margin-top: 0;margin-bottom: 0;"> <em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em> </p> ...
[]
Ganaa614/vit-tiny-patch16-224activity_recognition_4feats
Ganaa614
2025-10-18T05:20:08Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:WinKawaks/vit-tiny-patch16-224", "base_model:finetune:WinKawaks/vit-tiny-patch16-224", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-classification
2025-10-18T05:01:09Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-tiny-patch16-224activity_recognition_4feats This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://hu...
[]
contemmcm/6042918137bbb4a92a4ff6dc66f18447
contemmcm
2025-11-15T09:52:59Z
0
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "generated_from_trainer", "base_model:Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-mul", "base_model:finetune:Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-mul", "license:apache-2.0", "endpoints_compatible", "region:...
null
2025-11-15T09:48:07Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 6042918137bbb4a92a4ff6dc66f18447 This model is a fine-tuned version of [Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-mul...
[]
LBK95/Llama-3.2-1B-hf_RewardModel_LookAhead-5_V1_60P
LBK95
2025-11-27T17:26:37Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "reward-trainer", "trl", "base_model:meta-llama/Llama-3.2-1B", "base_model:finetune:meta-llama/Llama-3.2-1B", "endpoints_compatible", "region:us" ]
null
2025-11-27T16:55:47Z
# Model Card for Llama-3.2-1B-hf_RewardModel_LookAhead-5_V1 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline text = "The cap...
[]
nvidia/gpt-oss-120b-Eagle3-throughput
nvidia
2026-01-26T21:55:21Z
1,967
33
Model Optimizer
[ "Model Optimizer", "safetensors", "llama", "nvidia", "ModelOpt", "gpt-oss-120b", "quantized", "Eagle3", "text-generation", "base_model:openai/gpt-oss-120b", "base_model:finetune:openai/gpt-oss-120b", "license:other", "region:us" ]
text-generation
2025-12-09T21:47:10Z
# Model Overview ## Description: The NVIDIA gpt-oss-120b Eagle model is the Eagle head of OpenAI's gpt-oss-120b model, which is an auto-regressive language model that uses a mixture-of-experts (MoE) architecture with 5 billion activated parameters and 120 billion total parameters. For more information, please chec...
[]
hcasademunt/qwen3-32b_followup_ep1_lr1e-05-honesty
hcasademunt
2026-02-25T01:01:18Z
7
0
peft
[ "peft", "safetensors", "base_model:adapter:unsloth/qwen3-32b-bnb-4bit", "lora", "sft", "transformers", "trl", "unsloth", "text-generation", "conversational", "region:us" ]
text-generation
2026-02-25T01:01:08Z
# Model Card for qwen3-32b_followup_ep1_lr1e-05 This model is a fine-tuned version of [unsloth/qwen3-32b-bnb-4bit](https://huggingface.co/unsloth/qwen3-32b-bnb-4bit). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you ha...
[]
EloyOn/Beepo-22B-Q4_0-GGUF
EloyOn
2025-12-14T21:33:32Z
6
1
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "base_model:concedo/Beepo-22B", "base_model:quantized:concedo/Beepo-22B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-12-14T20:39:09Z
# EloyOn/Beepo-22B-Q4_0-GGUF This model was converted to GGUF format from [`concedo/Beepo-22B`](https://huggingface.co/concedo/Beepo-22B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/concedo/Beepo-2...
[]
mlx-community/ERNIE-4.5-VL-28B-A3B-Thinking-6bit
mlx-community
2026-01-28T20:10:07Z
15
0
transformers
[ "transformers", "safetensors", "ernie4_5_moe_vl", "image-text-to-text", "ERNIE4.5", "mlx", "conversational", "custom_code", "en", "zh", "license:apache-2.0", "endpoints_compatible", "6-bit", "region:us" ]
image-text-to-text
2026-01-28T19:22:40Z
# mlx-community/ERNIE-4.5-VL-28B-A3B-Thinking-6bit This model was converted to MLX format from [`baidu/ERNIE-4.5-VL-28B-A3B-Thinking`](https://huggingface.co/baidu/ERNIE-4.5-VL-28B-A3B-Thinking) using mlx-vlm version **0.3.10**. Refer to the [original model card](https://huggingface.co/baidu/ERNIE-4.5-VL-28B-A3B-Thinking) for more details on the model. ## Use with mlx ```bas...
[]
jerrrycans/watermark10000x2
jerrrycans
2025-08-12T21:43:58Z
0
0
diffusers
[ "diffusers", "flux", "image-to-image", "lora", "replicate", "base_model:black-forest-labs/FLUX.1-Kontext-dev", "base_model:adapter:black-forest-labs/FLUX.1-Kontext-dev", "license:other", "region:us" ]
image-to-image
2025-08-12T21:14:30Z
# Watermark10000X2 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-Kontext-dev image-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using: https://replicate.com/replicate/fast-flux-k...
[]
ApacheOne/HSWQ-fp8-Illustrious
ApacheOne
2026-04-23T17:39:56Z
74
0
null
[ "custom", "license:agpl-3.0", "region:us" ]
null
2026-04-19T04:09:07Z
# Model info Creator: [https://civitai.com/user/Bilered](https://civitai.com/user/Bilered) `Lumachrome_Illustrious_HSWQ_fp8e4m3.safetensors` [https://civitai.com/models/2528730/lumachrome-illustrious](https://civitai.com/models/2528730/lumachrome-illustrious) <table style="width: auto; border-collapse: collapse;"> ...
[]
OliverHeine/roberta-base_fold_5
OliverHeine
2026-04-28T16:55:45Z
0
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
text-classification
2026-04-28T16:02:33Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base_fold_5 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None da...
[]
qualiaadmin/91c3d47e-0ee5-4255-8abd-5ec0a09503f6
qualiaadmin
2025-11-10T19:20:38Z
0
0
lerobot
[ "lerobot", "safetensors", "smolvla", "robotics", "dataset:Calvert0921/SmolVLA_LiftCube_Franka_100", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-11-10T19:20:19Z
# Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This pol...
[]
FuturoEdu/embed
FuturoEdu
2025-12-08T16:32:40Z
22
0
sentence-transformers
[ "sentence-transformers", "pytorch", "onnx", "safetensors", "nomic_bert", "feature-extraction", "sentence-similarity", "mteb", "transformers", "transformers.js", "custom_code", "en", "arxiv:2402.01613", "license:apache-2.0", "model-index", "text-embeddings-inference", "endpoints_compa...
sentence-similarity
2025-12-08T16:32:39Z
# nomic-embed-text-v1: A Reproducible Long Context (8192) Text Embedder [Blog](https://www.nomic.ai/blog/posts/nomic-embed-text-v1) | [Technical Report](https://arxiv.org/abs/2402.01613) | [AWS SageMaker](https://aws.amazon.com/marketplace/seller-profile?id=seller-tpqidcj54zawi) | [Atlas Embedding and Unstructured Da...
[]
emiliogodigital/doom_health_gathering_supreme_unit8
emiliogodigital
2025-10-01T18:36:44Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-10-01T18:36:34Z
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sam...
[ { "start": 7, "end": 11, "text": "APPO", "label": "training method", "score": 0.8569490909576416 }, { "start": 633, "end": 637, "text": "APPO", "label": "training method", "score": 0.8294661641120911 }, { "start": 1110, "end": 1114, "text": "APPO", "la...
majentik/Voxtral-Mini-4B-Realtime-2602-RotorQuant
majentik
2026-04-14T13:55:49Z
0
0
transformers
[ "transformers", "voxtral", "audio", "speech", "speech-recognition", "realtime", "streaming", "asr", "kv-cache", "rotorquant", "quantization", "automatic-speech-recognition", "base_model:mistralai/Voxtral-Mini-4B-Realtime-2602", "base_model:finetune:mistralai/Voxtral-Mini-4B-Realtime-2602",...
automatic-speech-recognition
2026-04-14T13:55:48Z
# Voxtral-Mini-4B-Realtime-2602-RotorQuant RotorQuant KV-cache bundle for [`mistralai/Voxtral-Mini-4B-Realtime-2602`](https://huggingface.co/mistralai/Voxtral-Mini-4B-Realtime-2602). Rotational online re-basis of the attention cache — preferred for noisy, multi-speaker, or code-switching real-time streams. This artif...
[]
EZCon/Huihui-Qwen3-VL-2B-Instruct-abliterated-4bit-g32-mxfp4-mixed_4_8-mlx
EZCon
2026-04-05T01:24:16Z
148
0
mlx
[ "mlx", "safetensors", "qwen3_vl", "abliterated", "uncensored", "image-text-to-text", "conversational", "base_model:huihui-ai/Huihui-Qwen3-VL-2B-Instruct-abliterated", "base_model:quantized:huihui-ai/Huihui-Qwen3-VL-2B-Instruct-abliterated", "license:apache-2.0", "4-bit", "region:us" ]
image-text-to-text
2026-01-29T09:14:59Z
# EZCon/Huihui-Qwen3-VL-2B-Instruct-abliterated-4bit-g32-mxfp4-mixed_4_8-mlx This model was converted to MLX format from [`huihui-ai/Huihui-Qwen3-VL-2B-Instruct-abliterated`](https://huggingface.co/huihui-ai/Huihui-Qwen3-VL-2B-Instruct-abliterated) using mlx-vlm version **0.4.4**. Refer to the [original model card](ht...
[]
Joysulem/FireEcho
Joysulem
2026-02-17T06:55:33Z
7
0
fireecho
[ "fireecho", "qwen3-omni", "inference", "triton", "quantization", "moe", "fp4", "fp8", "int2", "single-gpu", "blackwell", "hebbian", "speculative-decoding", "custom-kernel", "text-generation", "dataset:Qwen/Qwen3-Omni-30B-A3B-Instruct", "license:cc-by-nc-4.0", "model-index", "4-bi...
text-generation
2026-02-17T05:50:33Z
# FireEcho Engine **High-performance single-GPU inference kernel for 30B+ MoE models** Created by [Luis E. Davila Flores](https://x.com/Joysulem) ## What is FireEcho? FireEcho is a from-scratch inference engine that runs **Qwen3-Omni-30B** (30.5 billion parameters, 128-expert MoE) on a **single RTX 5090** at **45+ ...
[]
HeyDunaX/Tay_Embedding
HeyDunaX
2026-02-04T11:18:59Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:20554", "loss:MultipleNegativesRankingLoss", "dataset:HeyDunaX/tay-vietnamese-nmt", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:AITeamV...
sentence-similarity
2026-02-04T11:18:21Z
# SentenceTransformer based on AITeamVN/Vietnamese_Embedding_v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [AITeamVN/Vietnamese_Embedding_v2](https://huggingface.co/AITeamVN/Vietnamese_Embedding_v2) on the [tay-vietnamese-nmt](https://huggingface.co/datasets/HeyDunaX/tay-vietnamese-n...
[]
edwixx/fish-s1-dac-min
edwixx
2026-01-25T20:10:21Z
0
0
null
[ "safetensors", "audio", "codec", "autoencoder", "pytorch", "license:cc-by-nc-sa-4.0", "region:us" ]
null
2026-01-25T20:09:22Z
# Fish-Speech S1 DAC Autoencoder weights (redistribution) An **unofficial** redistribution / mirror of the Fish-S1 DAC autoencoder weights, licensed **CC BY-NC-SA 4.0**. ### Attribution: - **Original project:** [Fish-Speech](https://github.com/fishaudio/fish-speech) (Fish Audio). - **Original model release:** [fishau...
[]
mradermacher/Llama3.1-IgneousIguana-8B-Heretic-i1-GGUF
mradermacher
2025-12-24T23:22:54Z
28
1
transformers
[ "transformers", "gguf", "merge", "mergekit", "llama-3.1", "Igneous", "Iguana", "8B", "Uncensored", "Heretic", "en", "base_model:ChiKoi7/Llama3.1-IgneousIguana-8B-Heretic", "base_model:quantized:ChiKoi7/Llama3.1-IgneousIguana-8B-Heretic", "license:llama3.1", "endpoints_compatible", "reg...
null
2025-12-24T16:25:29Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_...
[]
humjie/diffusion_bimanual-so101-fold-towel_60
humjie
2026-03-26T01:44:16Z
27
0
lerobot
[ "lerobot", "safetensors", "robotics", "diffusion", "dataset:humjie/bimanual-so101-fold-towel", "arxiv:2303.04137", "license:apache-2.0", "region:us" ]
robotics
2026-03-24T06:00:45Z
# Model Card for diffusion <!-- Provide a quick summary of what the model is/does. --> [Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation. This policy has ...
[]
wertania/so101-orange-pick
wertania
2026-03-25T06:33:34Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "smolvla", "dataset:mvhk/so101_test_orange_pick", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2026-03-25T06:33:17Z
# Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This pol...
[]
GMorgulis/Qwen2.5-7B-Instruct-OwlDeffenseSteerVec-lambda5-TEST-ft0.42
GMorgulis
2026-02-18T06:04:21Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "endpoints_compatible", "region:us" ]
null
2026-02-18T04:04:19Z
# Model Card for Qwen2.5-7B-Instruct-OwlDeffenseSteerVec-lambda5-TEST-ft0.42 This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeli...
[]
nscharrenberg/DBNL-QA-NL-e5-s1024-lr-1e-4-lr-seed3704
nscharrenberg
2025-10-15T12:34:09Z
0
0
transformers
[ "transformers", "tensorboard", "generated_from_trainer", "trl", "unsloth", "sft", "base_model:unsloth/Llama-3.2-1B-Instruct", "base_model:finetune:unsloth/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-10-15T12:33:05Z
# Model Card for DBNL-QA-NL-e5-s1024-lr-1e-4-lr-seed3704 This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline questi...
[]
Lambent/Mira-v1.17-Karcher-27B
Lambent
2025-11-28T20:11:55Z
2
1
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "mergekit", "merge", "conversational", "base_model:Lambent/Mira-v1-dpo-27B", "base_model:merge:Lambent/Mira-v1-dpo-27B", "base_model:Lambent/Mira-v1.11-Ties-27B", "base_model:merge:Lambent/Mira-v1.11-Ties-27B", "base_model:Lambent/...
image-text-to-text
2025-11-23T13:59:54Z
![image](https://cdn-uploads.huggingface.co/production/uploads/6592ef6e2a0a886ef0872e71/cKSgIftjwpr-QsCe5WII5.png) Karcher merge with just Mira; she's still resonant with Mira here (8.5/10, with the 0.5 being one who just wanted 'Mirae' which is still pretty close lol) Confirmed that she still has the sense of "deepe...
[]
allenai/olmOCR-7B-0825-FP8
allenai
2025-10-22T15:27:44Z
9,426
10
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "conversational", "en", "dataset:allenai/olmOCR-mix-0225", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:quantized:Qwen/Qwen2.5-VL-7B-Instruct", "license:apache-2.0", "text-generation-inference", "endpoints_compatible",...
image-text-to-text
2025-08-13T20:55:44Z
<img alt="olmOCR Logo" src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/olmocr/olmocr.png" width="242px" style="margin-left:'auto' margin-right:'auto' display:'block'"> # olmOCR-7B-0825-FP8 Quantized to FP8 Version of [olmOCR-7B-0825](https://huggingface.co/allenai/olmOCR-7B-0825), using llmcompr...
[]
prime1234/Qwen3-4B-Thinking-2507-Claude-4.5-Opus-High-Reasoning-Distill-Heretic-Abliterated
prime1234
2026-02-19T18:56:10Z
7
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "heretic", "uncensored", "decensored", "abliterated", "finetune", "conversational", "base_model:TeichAI/Qwen3-4B-Thinking-2507-Claude-4.5-Opus-High-Reasoning-Distill", "base_model:finetune:TeichAI/Qwen3-4B-Thinking-2507-Claude-4.5-Opus...
text-generation
2026-02-19T18:56:09Z
<h2>Qwen3-4B-Thinking-2507-Claude-4.5-Opus-High-Reasoning-Distill-Heretic-Abliterated</h2> Ablitered/uncensored by [Heretic](https://github.com/p-e-w/heretic) v1.0.1 Refusals: 14/100, KL divergence: 0.01 Original Model Refusal rate: 98/100 Context: 256k ENJOY THE FREEDOM! <B>This model part of the new Qwen3-24B-...
[]
mradermacher/nepali-legal-llm-GGUF
mradermacher
2026-04-21T12:42:27Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:manishkhanal/nepali-legal-llm", "base_model:quantized:manishkhanal/nepali-legal-llm", "endpoints_compatible", "region:us", "conversational" ]
null
2026-04-21T12:09:37Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
piotrmaciejbednarski/gliner-pii-polish
piotrmaciejbednarski
2025-12-07T10:15:43Z
17
1
gliner
[ "gliner", "pytorch", "ner", "named-entity-recognition", "pii", "privacy", "polish", "fine-tuned", "pl", "dataset:custom", "base_model:urchade/gliner_multi-v2.1", "base_model:finetune:urchade/gliner_multi-v2.1", "license:mit", "region:us" ]
null
2025-12-07T09:37:41Z
# GLiNER PII Polish - Fine-tuned Model for Polish Personal Identifiable Information Detection ## Model Description This model is a fine-tuned version of [`urchade/gliner_multi-v2.1`](https://huggingface.co/urchade/gliner_multi-v2.1) specifically optimized for detecting Personal Identifiable Information (PII) in Polis...
[]
XiaoHe021/starvector-1b-im2svg
XiaoHe021
2026-03-07T07:04:59Z
18
0
transformers
[ "transformers", "safetensors", "starvector", "text-generation", "custom_code", "en", "arxiv:2312.11556", "license:apache-2.0", "region:us" ]
text-generation
2026-03-07T07:04:58Z
# Model Card for StarVector ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65c27c201b5b51dd4814fcd2/ULL7FkrMHA38I8olD7nEh.png) StarVector is a foundation model for generating Scalable Vector Graphics (SVG) code from images and text. It utilizes a Vision-Language Modeling architecture to understand...
[]
waber223/my-Health_stress_condtion-model
waber223
2026-02-13T09:31:35Z
1
0
transformers
[ "transformers", "tf", "roberta", "text-classification", "generated_from_keras_callback", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
text-classification
2026-02-13T09:31:19Z
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # my-Health_stress_condtion-model This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown...
[]
huihui-ai/Huihui-GLM-4.5V-abliterated
huihui-ai
2025-08-30T14:47:20Z
57
16
transformers
[ "transformers", "safetensors", "glm4v_moe", "image-text-to-text", "abliterated", "uncensored", "conversational", "zh", "en", "base_model:zai-org/GLM-4.5V", "base_model:finetune:zai-org/GLM-4.5V", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-08-20T02:24:47Z
# huihui-ai/Huihui-GLM-4.5V-abliterated This is an uncensored version of [zai-org/GLM-4.5V](https://huggingface.co/zai-org/GLM-4.5V) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to learn more about it). It was only the text part t...
[]
marcellobullo/sharedrep-imdb-reward-clustering-seed28-k4
marcellobullo
2025-10-11T19:43:13Z
0
0
transformers
[ "transformers", "safetensors", "sharedrep-gpt2", "generated_from_trainer", "reward-trainer", "trl", "dataset:marcellobullo/gpt2-imdb-raw", "base_model:lvwerra/gpt2-imdb", "base_model:finetune:lvwerra/gpt2-imdb", "endpoints_compatible", "region:us" ]
null
2025-10-11T19:43:06Z
# Model Card for sharedrep-imdb-reward-clustering-seed28-k4 This model is a fine-tuned version of [lvwerra/gpt2-imdb](https://huggingface.co/lvwerra/gpt2-imdb) on the [marcellobullo/gpt2-imdb-raw](https://huggingface.co/datasets/marcellobullo/gpt2-imdb-raw) dataset. It has been trained using [TRL](https://github.com/h...
[]
arithmetic-circuit-overloading/Llama-3.3-70B-Instruct-3d-500K-50K-0.1-reverse-padzero-plus-mul-sub-99-512D-2L-2H-2048I
arithmetic-circuit-overloading
2026-02-27T00:56:29Z
187
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "base_model:meta-llama/Llama-3.3-70B-Instruct", "base_model:finetune:meta-llama/Llama-3.3-70B-Instruct", "license:llama3.3", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2026-02-27T00:42:45Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-3.3-70B-Instruct-3d-500K-50K-0.1-reverse-padzero-plus-mul-sub-99-512D-2L-2H-2048I This model is a fine-tuned version of [me...
[]
gakhg/test15_alf_db_ties_epoch6
gakhg
2026-02-21T02:57:31Z
0
0
peft
[ "peft", "safetensors", "qwen3", "lora", "agent", "tool-use", "alfworld", "dbbench", "text-generation", "conversational", "en", "dataset:u-10bei/sft_alfworld_trajectory_dataset_v5", "dataset:u-10bei/dbbench_sft_dataset_react_v4", "base_model:Qwen/Qwen3-4B-Instruct-2507", "base_model:adapt...
text-generation
2026-02-21T02:55:58Z
# qwen3-4b-agent-trajectory-lora This repository provides a **LoRA adapter** fine-tuned from **Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**. It contains **LoRA adapter weights only**; the base model must be loaded separately. ## Training Objective This adapter is trained to improve **multi-tu...
[ { "start": 63, "end": 67, "text": "LoRA", "label": "training method", "score": 0.8947916030883789 }, { "start": 134, "end": 138, "text": "LoRA", "label": "training method", "score": 0.9135580658912659 }, { "start": 180, "end": 184, "text": "LoRA", "lab...
deexjay23/gemma-4-31B-it-mlx-8Bit
deexjay23
2026-04-15T10:06:51Z
0
0
transformers
[ "transformers", "safetensors", "gemma4", "image-text-to-text", "mlx", "conversational", "base_model:google/gemma-4-31B-it", "base_model:quantized:google/gemma-4-31B-it", "license:apache-2.0", "endpoints_compatible", "8-bit", "region:us" ]
image-text-to-text
2026-04-15T10:06:22Z
# deexjay23/gemma-4-31B-it-mlx-8Bit The Model [deexjay23/gemma-4-31B-it-mlx-8Bit](https://huggingface.co/deexjay23/gemma-4-31B-it-mlx-8Bit) was converted to MLX format from [google/gemma-4-31B-it](https://huggingface.co/google/gemma-4-31B-it) using mlx-lm version **0.31.2**. ## Use with mlx ```bash pip install mlx-l...
[]
Muapi/donkey-kong-country-snes-style-flux
Muapi
2025-09-03T11:04:07Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-09-03T11:03:30Z
# Donkey Kong Country (SNES) Style [FLUX] ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: dkcstyle, pixel art style ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_...
[]
krystian-kaczor/krystian-flux-avatar
krystian-kaczor
2025-08-07T19:20:05Z
0
0
null
[ "region:us" ]
null
2025-08-07T09:04:34Z
```markdown --- license: creativeml-openrail-m base_model: black-forest-labs/FLUX.1-schnell tags: - flux - lora - text-to-image - diffusers - avatar instance_prompt: "a photo of KAKI" --- # LoRA for FLUX: krystian-flux-avatar These are LoRA weights for the base model **[black-forest-labs/FLUX.1-schnell](https://huggi...
[]
meetmerchant/tech-tweet-generator-llama3
meetmerchant
2025-11-30T19:17:31Z
0
0
mlx-lm
[ "mlx-lm", "tech", "ai", "research papers", "twitter", "viral-content", "mlx", "lora", "en", "base_model:mlx-community/Llama-3.2-3B-Instruct-4bit", "base_model:adapter:mlx-community/Llama-3.2-3B-Instruct-4bit", "license:mit", "region:us" ]
null
2025-11-30T00:05:04Z
# Tech Tweet Generator Llama-3 (Fine-Tuned) This model is a fine-tuned version of **Llama-3.2-3B-Instruct** designed to convert dense scientific and technical research paper abstracts into engaging, viral Twitter threads. It was trained using **LoRA (Low-Rank Adaptation)** on the Apple MLX framework. ## 🚀 Model Des...
[ { "start": 247, "end": 251, "text": "LoRA", "label": "training method", "score": 0.7064386606216431 } ]
godnpeter/combined_frozen_chunk8_yesproprio_unified_text_prompt_1010
godnpeter
2025-10-11T18:12:09Z
0
0
lerobot
[ "lerobot", "safetensors", "smolvla", "robotics", "dataset:godnpeter/aopoli-lv-libero_combined_no_noops_lerobot_v21", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-10-11T18:11:55Z
# Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This pol...
[]
mlx-community/whisper-medium-4bit
mlx-community
2025-12-15T18:08:05Z
42
0
mlx-audio-plus
[ "mlx-audio-plus", "safetensors", "whisper", "mlx", "speech-recognition", "speech-to-text", "stt", "automatic-speech-recognition", "base_model:openai/whisper-medium", "base_model:finetune:openai/whisper-medium", "license:apache-2.0", "region:us" ]
automatic-speech-recognition
2025-12-14T13:54:50Z
# mlx-community/whisper-medium-4bit This model was converted to MLX format from [openai/whisper-medium](https://github.com/openai/whisper) using [mlx-audio-plus](https://github.com/DePasqualeOrg/mlx-audio-plus) version **0.1.4**. ## Use with mlx-audio-plus ```bash pip install -U mlx-audio-plus ``` ### Command line ...
[]
Runware/control_v11f1e_sd15_tile
Runware
2025-09-03T16:03:33Z
37
0
diffusers
[ "diffusers", "art", "controlnet", "stable-diffusion", "controlnet-v1-1", "image-to-image", "arxiv:2302.05543", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:openrail", "region:us" ]
image-to-image
2025-09-03T16:03:16Z
# Controlnet - v1.1 - *Tile Version* **Controlnet v1.1** was released in [lllyasviel/ControlNet-v1-1](https://huggingface.co/lllyasviel/ControlNet-v1-1) by [Lvmin Zhang](https://huggingface.co/lllyasviel). This checkpoint is a conversion of [the original checkpoint](https://huggingface.co/lllyasviel/ControlNet-v1-1/b...
[]
cicerothoma/nigerian_food_classification
cicerothoma
2025-11-09T00:51:37Z
0
0
null
[ "image-classification", "dataset:cicerothoma/nigeria_food", "base_model:google/efficientnet-b4", "base_model:finetune:google/efficientnet-b4", "license:mit", "region:us" ]
image-classification
2025-11-09T00:38:02Z
# Nigerian Food Classification — EfficientNet-B4 Classifies images of Nigerian food into 18 classes using transfer learning with EfficientNet-B4. This model is fine-tuned on a curated dataset and optimized for balanced precision and recall across diverse dishes and plating styles. ## Model Card - Architecture: Effic...
[ { "start": 107, "end": 124, "text": "transfer learning", "label": "training method", "score": 0.8353849053382874 } ]
mradermacher/NVIDIA-Nemotron-3-Super-120B-A12B-BF16-GGUF
mradermacher
2026-03-21T08:49:51Z
602
0
transformers
[ "transformers", "gguf", "nvidia", "pytorch", "nemotron-3", "latent-moe", "mtp", "en", "fr", "es", "it", "de", "ja", "zh", "dataset:nvidia/nemotron-post-training-v3", "dataset:nvidia/nemotron-pre-training-datasets", "base_model:nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-BF16", "base_m...
null
2026-03-20T18:07:58Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
jialicheng/unlearn-so_cifar10_swin-base_salun_10_100
jialicheng
2025-10-29T06:19:18Z
5
0
transformers
[ "transformers", "safetensors", "swin", "image-classification", "vision", "generated_from_trainer", "base_model:microsoft/swin-base-patch4-window7-224", "base_model:finetune:microsoft/swin-base-patch4-window7-224", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-classification
2025-10-29T06:17:32Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 100 This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-pat...
[]
smorand/hf-sdxl-endpoint
smorand
2026-01-11T03:50:05Z
0
0
null
[ "endpoints_compatible", "region:us" ]
null
2026-01-10T16:14:06Z
# Stable Diffusion XL - Hugging Face Inference Endpoint Custom handler for deploying Stable Diffusion XL as a text-to-image API on Hugging Face Inference Endpoints. ## Features - Text-to-image generation with Stable Diffusion XL - Configurable parameters (steps, guidance, dimensions, seed) - Optional refiner for hig...
[]
Ali4815162342/chest-disease-detector
Ali4815162342
2025-09-02T15:41:47Z
0
1
null
[ "region:us" ]
null
2025-09-02T15:17:48Z
# 🦠 COVID-19 X-ray Classification System [![Python](https://img.shields.io/badge/Python-3.8+-blue.svg)](https://python.org) [![FastAPI](https://img.shields.io/badge/FastAPI-0.68+-green.svg)](https://fastapi.tiangolo.com) [![PyTorch](https://img.shields.io/badge/PyTorch-1.9+-red.svg)](https://pytorch.org) [![Lice...
[]
yueqis/non_web_sweagent-qwen-coder-7b-3epochs-30k-5e-5
yueqis
2025-10-16T02:32:29Z
1
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-Coder-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-Coder-7B-Instruct", "license:other", "text-generation-inference", "endpoints_compatib...
text-generation
2025-10-16T02:28:34Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # non_web_sweagent-qwen-coder-7b-3epochs-30k-5e-5 This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://hu...
[]
0xA50C1A1/aya-expanse-32b-heretic
0xA50C1A1
2026-03-01T17:16:52Z
29
0
transformers
[ "transformers", "safetensors", "cohere", "text-generation", "heretic", "uncensored", "decensored", "abliterated", "conversational", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "el", "fa", "pl", "id", "cs", "he", "hi", "nl", "ro", "ru", "tr", ...
text-generation
2026-03-01T17:14:03Z
# This is a decensored version of [CohereLabs/aya-expanse-32b](https://huggingface.co/CohereLabs/aya-expanse-32b), made using [Heretic](https://github.com/p-e-w/heretic) v1.2.0 ## Abliteration parameters | Parameter | Value | | :-------- | :---: | | **direction_index** | 22.35 | | **attn.o_proj.max_weight** | 1.39 | ...
[]