| Column | Type | Range / cardinality |
| --- | --- | --- |
| modelId | string | length 9 to 122 |
| author | string | length 2 to 36 |
| last_modified | timestamp[us, tz=UTC] | 2021-05-20 01:31:09 to 2026-05-05 06:14:24 |
| downloads | int64 | 0 to 4.03M |
| likes | int64 | 0 to 4.32k |
| library_name | string (categorical) | 189 distinct values |
| tags | list | 1 to 237 items |
| pipeline_tag | string (categorical) | 53 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2026-05-05 05:54:22 |
| card | string | length 500 to 661k |
| entities | list | 0 to 12 items |
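Each preview row below repeats these eleven fields in order. A minimal sketch of one record in Python, using values copied from the first row of the preview (the dict layout itself is an assumption about how the viewer flattens a row; the tags and card values are truncated here, as in the preview):

```python
# One record of the preview, reconstructed from the first row.
# Field names come from the column schema; the dict layout is an
# assumed flattening of the dataset viewer's row format.
record = {
    "modelId": "fariasultanacodes/magic",
    "author": "fariasultanacodes",
    "last_modified": "2025-11-17T06:09:57Z",
    "downloads": 0,
    "likes": 0,
    "library_name": "transformers",
    "tags": ["transformers", "safetensors", "text-generation"],  # truncated
    "pipeline_tag": "text-generation",
    "createdAt": "2025-11-17T05:42:58Z",
    "card": "# Magic Model ...",  # real cards are 500 chars to 661k chars
    "entities": [],
}

# Sanity checks against the schema ranges above (card length is not
# checked because this sample card is truncated).
assert 9 <= len(record["modelId"]) <= 122
assert 2 <= len(record["author"]) <= 36
assert isinstance(record["downloads"], int) and record["downloads"] >= 0
assert 1 <= len(record["tags"]) <= 237
assert 0 <= len(record["entities"]) <= 12
```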
fariasultanacodes/magic
fariasultanacodes
2025-11-17T06:09:57Z
0
0
transformers
[ "transformers", "safetensors", "text-generation", "magic", "mmlu", "causal-lm", "conversational", "en", "base_model:Qwen/Qwen2.5-1.5B", "base_model:finetune:Qwen/Qwen2.5-1.5B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-11-17T05:42:58Z
# Magic Model 🪄 Fine-tuned language model for MMLU-style question answering. **Developed by Likhon Sheikh** 🚀 ## Features - ✅ Multi-safetensor support - ✅ Fast tokenizer with tokenizer.json - ✅ LoRA fine-tuning for efficiency - ✅ MMLU-optimized responses - ✅ Production-ready deployment ## Usage ```python from t...
[]
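The downloads and likes ranges in the schema use suffixed counts such as 4.03M and 4.32k, and some rows show comma-grouped values like 1,227. A small helper to normalize these back to integers (the name `parse_count` and the exact suffix set are assumptions, not part of the dataset):

```python
def parse_count(s: str) -> int:
    """Parse a human-readable count like '4.03M', '4.32k', or '1,227'."""
    s = s.strip().replace(",", "")
    suffixes = {"k": 1_000, "M": 1_000_000, "B": 1_000_000_000}
    if s and s[-1] in suffixes:
        # round() avoids float truncation artifacts, e.g. 4.03 * 1e6
        return round(float(s[:-1]) * suffixes[s[-1]])
    return int(s)

print(parse_count("4.03M"))  # 4030000
print(parse_count("4.32k"))  # 4320
print(parse_count("1,227"))  # 1227
```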
feabries/dog_dreambooth_model_prior
feabries
2026-03-04T09:54:51Z
12
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "endpoints_compati...
text-to-image
2026-03-04T08:58:31Z
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # DreamBooth - feabries/dog_dreambooth_model_prior This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. ...
[ { "start": 199, "end": 209, "text": "DreamBooth", "label": "training method", "score": 0.9621815085411072 }, { "start": 259, "end": 269, "text": "dreambooth", "label": "training method", "score": 0.9596779346466064 }, { "start": 374, "end": 384, "text": "D...
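As the record above shows, each entry in the entities column is a span with `start`, `end`, `text`, `label`, and `score`, where the character offsets index into the full card string. A toy sketch of that invariant (the card string and offsets here are hypothetical stand-ins; the real offsets would not line up with the truncated card excerpts shown in this preview):

```python
# Hypothetical card text and entity span mirroring the shape of the
# entities column: start/end are character offsets into the card.
card = "This is a dreambooth model derived from CompVis/stable-diffusion-v1-4."
entity = {
    "start": 10,
    "end": 20,
    "text": "dreambooth",
    "label": "training method",
    "score": 0.96,
}

# The span's text should equal the slice of the card it points at.
assert card[entity["start"]:entity["end"]] == entity["text"]
print(card[entity["start"]:entity["end"]])  # dreambooth
```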
tiiuae/Falcon-H1-Tiny-R-0.6B-GGUF
tiiuae
2026-01-21T19:37:19Z
496
8
transformers
[ "transformers", "gguf", "falcon-h1", "edge", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2026-01-13T06:53:58Z
<img src="https://cdn-uploads.huggingface.co/production/uploads/62441d1d9fdefb55a0b7d12c/l1du02RjuAZJcksI5tQ-F.png" alt="drawing" width="800"/> # Table of Contents 0. [TL;DR](#TL;DR) 1. [Model Details](#model-details) 2. [Training Details](#training-details) 3. [Usage](#usage) 4. [Evaluation](#evaluation) 5. [Citati...
[]
mradermacher/concise-cbt-therapist-qwen3-1.7b-i1-GGUF
mradermacher
2026-01-28T14:46:11Z
11
0
transformers
[ "transformers", "gguf", "en", "base_model:therapygym/concise-cbt-therapist-qwen3-1.7b", "base_model:quantized:therapygym/concise-cbt-therapist-qwen3-1.7b", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2026-01-28T13:43:57Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_...
[]
AlignmentResearch/obfuscation-atlas-Meta-Llama-3-8B-Instruct-kl0.01-det1-seed3-deception_probe
AlignmentResearch
2026-02-20T21:59:27Z
0
0
peft
[ "peft", "deception-detection", "rlvr", "alignment-research", "obfuscation-atlas", "lora", "model-type:honest", "arxiv:2602.15515", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:mit", "region:us" ]
null
2026-02-17T10:05:12Z
# RLVR-trained policy from The Obfuscation Atlas This is a policy trained on MBPP-Honeypot with deception probes, from the [Obfuscation Atlas paper](https://arxiv.org/abs/2602.15515), uploaded for reproducibility and further research. The training code and RL environment are available at: https://github.com/Alignment...
[]
mlx-community/MiroThinker-1.7-mini-mlx-4Bit
mlx-community
2026-03-15T23:37:40Z
143
0
transformers
[ "transformers", "safetensors", "qwen3_moe", "text-generation", "agent", "open-source", "miromind", "deep-research", "mlx", "mlx-my-repo", "conversational", "en", "base_model:miromind-ai/MiroThinker-1.7-mini", "base_model:quantized:miromind-ai/MiroThinker-1.7-mini", "license:apache-2.0", ...
text-generation
2026-03-15T23:35:57Z
# cbalgeman/MiroThinker-1.7-mini-mlx-4Bit The Model [cbalgeman/MiroThinker-1.7-mini-mlx-4Bit](https://huggingface.co/cbalgeman/MiroThinker-1.7-mini-mlx-4Bit) was converted to MLX format from [miromind-ai/MiroThinker-1.7-mini](https://huggingface.co/miromind-ai/MiroThinker-1.7-mini) using mlx-lm version **0.29.1**. ##...
[]
gumperto/Llama-3.2-1B-Instruct-emergent-finetune-tests_samples-down-l8-r1
gumperto
2025-09-18T00:19:42Z
1
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "sft", "trl", "unsloth", "conversational", "base_model:unsloth/Llama-3.2-1B-Instruct", "base_model:finetune:unsloth/Llama-3.2-1B-Instruct", "text-generation-inference", "endpoints_compatible", "region:us" ...
text-generation
2025-09-18T00:08:43Z
# Model Card for Llama-3.2-1B-Instruct-emergent-finetune-tests_samples-down-l8-r1 This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformer...
[]
StefanWagnerWandelbots/pusht_keyboard_policy_100k_100
StefanWagnerWandelbots
2026-01-19T22:24:18Z
0
0
lerobot
[ "lerobot", "safetensors", "act", "robotics", "dataset:StefanWagnerWandelbots/pusht_keyboard_15fps", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2026-01-19T22:24:02Z
# Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ...
[ { "start": 17, "end": 20, "text": "act", "label": "training method", "score": 0.831265389919281 }, { "start": 120, "end": 123, "text": "ACT", "label": "training method", "score": 0.8477550148963928 }, { "start": 865, "end": 868, "text": "act", "label":...
openpecha/BoKenlm-syl-v0.4
openpecha
2026-04-13T09:50:43Z
0
0
null
[ "region:us" ]
null
2026-04-13T09:47:51Z
# BoKenlm-syl-v0.4 - Tibetan KenLM Language Model A KenLM n-gram language model trained on Tibetan text, tokenized with syllable tokenizer. ## Model Details | Parameter | Value | | --- | --- | | **Model Type** | Modified Kneser-Ney 5-gram | | **Tokenizer** | Tibetan syllable-based (botok-rs SimpleTokenizer) | | **Tr...
[]
OpenMed/privacy-filter-multilingual-mlx-8bit
OpenMed
2026-05-04T10:54:25Z
0
1
openmed
[ "openmed", "openai_privacy_filter", "mlx", "apple-silicon", "token-classification", "pii", "de-identification", "privacy-filter", "multilingual", "ar", "bn", "de", "en", "es", "fr", "hi", "it", "ja", "ko", "nl", "pt", "te", "tr", "vi", "zh", "dataset:ai4privacy/pii-...
token-classification
2026-05-03T21:09:50Z
# OpenMed Privacy Filter (Multilingual) — MLX 8-bit A native [MLX](https://github.com/maziyarpanahi/openmed/) port of [`OpenMed/privacy-filter-multilingual`](https://huggingface.co/OpenMed/privacy-filter-multilingual) for fast, on-device fine-grained PII detection across **54 categories** and **16 languages** on Apple...
[]
Leoinhouse/ImagineClassification-finetuned-model
Leoinhouse
2026-03-27T07:01:34Z
0
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "vision", "fashion", "ecommerce", "dataset:ashraq/fashion-product-images-small", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "endpoints_compatible", ...
image-classification
2026-03-27T05:26:12Z
# ImagineClassification (fine-tuned ViT) Fine-tuned **Vision Transformer (ViT-B/16, patch 16, 224×224)** for **coarse fashion product classification** into four `masterCategory` labels from the [Fashion Product Images (small)](https://huggingface.co/datasets/ashraq/fashion-product-images-small) dataset. ## Model summ...
[]
DJLougen/Ornstein-27B-GGUF
DJLougen
2026-04-09T21:37:53Z
1,227
8
null
[ "gguf", "reasoning", "qwen3.5", "ddm", "llama-cpp", "quantized", "image-text-to-text", "en", "base_model:DJLougen/Ornstein-27B", "base_model:quantized:DJLougen/Ornstein-27B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
image-text-to-text
2026-04-07T23:25:31Z
# Ornstein-27B-GGUF GGUF quantizations of [DJLougen/Ornstein-27B](https://huggingface.co/DJLougen/Ornstein-27B) — a reasoning-focused fine-tune of Qwen 3.5 27B trained on **1,229 high-quality reasoning traces** curated through a custom **Drift Diffusion Modeling (DDM)** pipeline. ## Support This Work I'm a P...
[]
leonat3t/collapse_qwen2-0.5b_hs2_replace_iter1_sftsd0
leonat3t
2026-02-10T07:42:56Z
0
0
null
[ "safetensors", "qwen2", "trl", "sft", "generated_from_trainer", "base_model:Qwen/Qwen2-0.5B", "base_model:finetune:Qwen/Qwen2-0.5B", "license:apache-2.0", "region:us" ]
null
2026-02-09T10:59:14Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # collapse_qwen2-0.5b_hs2_replace_iter1_sftsd0 This model is a fine-tuned version of [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/...
[]
bjorndev/recita-dialogue-identifier
bjorndev
2025-11-27T21:05:13Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-11-26T17:45:54Z
# Model Card for recita-dialogue-identifier This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline questio...
[]
Khanbby/HunyuanVideo
Khanbby
2026-03-04T06:06:14Z
9
1
null
[ "text-to-video", "arxiv:2412.03603", "arxiv:2405.07719", "license:other", "region:us" ]
text-to-video
2026-03-04T06:06:13Z
<!-- ## **HunyuanVideo** --> <p align="center"> <img src="https://raw.githubusercontent.com/Tencent/HunyuanVideo/refs/heads/main/assets/logo.png" height=100> </p> # HunyuanVideo: A Systematic Framework For Large Video Generation Model Training ----- This repo contains PyTorch model definitions, pre-trained weigh...
[]
Thireus/GLM-4.7-THIREUS-IQ4_NL-SPECIAL_SPLIT
Thireus
2026-02-12T08:55:58Z
0
0
null
[ "gguf", "arxiv:2505.23786", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2026-01-02T08:39:45Z
# GLM-4.7 ## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.7-THIREUS-BF16-SPECIAL_SPLIT/) about? This repository provides **GGUF-quantized tensors** for the GLM-4.7 model (official repo: https://huggingface.co/zai-org/GLM-4.7). These GGUF shards are designed to be used with **Thireus’ ...
[]
BilateralBusiness/perma_chef_filipina_caribe_verde_menta_hombre_4_20251001_1719
BilateralBusiness
2025-10-02T22:36:03Z
2
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-10-02T22:22:33Z
# Perma_Chef_Filipina_Caribe_Verde_Menta_Hombre_4_20251001_1719 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolk...
[]
franzhanz/pythia-70m-deduped-finetuned-NYT
franzhanz
2025-11-27T02:54:04Z
2
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "generated_from_trainer", "base_model:EleutherAI/pythia-70m-deduped", "base_model:finetune:EleutherAI/pythia-70m-deduped", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-11-27T02:52:39Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pythia-70m-deduped-finetuned-NYT This model is a fine-tuned version of [EleutherAI/pythia-70m-deduped](https://huggingface.co/Ele...
[]
Ignatfhc/Hhhhh
Ignatfhc
2025-09-09T22:16:06Z
1
0
transformers
[ "transformers", "safetensors", "vit", "image-feature-extraction", "endpoints_compatible", "region:us" ]
image-feature-extraction
2025-09-09T15:51:35Z
# Unified Hugging Face Model Repository Este repositorio contiene una unificación de múltiples modelos. ## Modelos originales - https://huggingface.co/huihui-ai/Huihui-gemma-3-270m-it-abliterated - https://huggingface.co/Searchium-ai/clip4clip-webvid150k - https://huggingface.co/caidas/swin2SR-lightweight-x2-64 - htt...
[]
mradermacher/Talkia_FP16-GGUF
mradermacher
2025-08-25T19:31:44Z
1
0
transformers
[ "transformers", "gguf", "en", "base_model:ShihteSiao/Talkia_FP16", "base_model:quantized:ShihteSiao/Talkia_FP16", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-25T18:46:39Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static qu...
[]
ZhengPeng7/BiRefNet_lite
ZhengPeng7
2026-02-04T22:43:46Z
31,862
16
birefnet
[ "birefnet", "safetensors", "background-removal", "mask-generation", "Dichotomous Image Segmentation", "Camouflaged Object Detection", "Salient Object Detection", "pytorch_model_hub_mixin", "model_hub_mixin", "image-segmentation", "custom_code", "arxiv:2401.03407", "endpoints_compatible", "...
image-segmentation
2024-08-02T03:51:45Z
<h1 align="center">Bilateral Reference for High-Resolution Dichotomous Image Segmentation</h1> <div align='center'> <a href='https://scholar.google.com/citations?user=TZRzWOsAAAAJ' target='_blank'><strong>Peng Zheng</strong></a><sup> 1,4,5,6</sup>,&thinsp; <a href='https://scholar.google.com/citations?user=0uP...
[]
soda-research/discrete-audio-isoflop-9e19-851M-d1408-L14-B128-1016c0
soda-research
2026-02-13T17:29:06Z
1
0
null
[ "safetensors", "qwen3", "audio", "speech", "foundation-model", "next-token-prediction", "isoflop", "research", "license:apache-2.0", "region:us" ]
null
2026-02-10T09:38:39Z
# Discrete Audio IsoFLOP Model (discrete-audio-isoflop-9e19-851M-d1408-L14-B128-1016c0) A suite of discrete audio models trained for our IsoFLOP study as part of **SODA**, which is a unified next-token prediction on interleaved semantic, acoustic, and text tokens. 🥤 **Project Page:** [https://soda-audio.github.io](...
[]
Zyoung203/Qwen3-Embedding-0.6B-PPO
Zyoung203
2025-09-11T09:28:32Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "dataset:nq_hotpotqa_train", "arxiv:1909.08593", "base_model:Qwen/Qwen3-Embedding-0.6B", "base_model:finetune:Qwen/Qwen3-Embedding-0.6B", "endpoints_compatible", "region:us" ]
null
2025-08-26T08:25:22Z
# Model Card for Qwen3-Embedding-0.6B-PPO This model is a fine-tuned version of [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) on the [nq_hotpotqa_train](https://huggingface.co/datasets/nq_hotpotqa_train) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Q...
[]
NeuML/txtai-arxiv
NeuML
2026-04-20T17:04:44Z
9
21
txtai
[ "txtai", "sentence-similarity", "en", "dataset:arxiv_dataset", "license:cc0-1.0", "region:us" ]
sentence-similarity
2024-01-16T13:02:57Z
# arXiv txtai embeddings index This is a [txtai](https://github.com/neuml/txtai) embeddings index for the [arXiv dataset](https://hf.co/datasets/arxiv_dataset) [metadata](https://info.arxiv.org/help/prep.html). txtai must be [installed](https://neuml.github.io/txtai/install/) to use this model. ## Example This inde...
[]
mradermacher/MiniMax-M2.5-GGUF
mradermacher
2026-02-21T17:25:18Z
784
0
transformers
[ "transformers", "gguf", "en", "base_model:MiniMaxAI/MiniMax-M2.5", "base_model:quantized:MiniMaxAI/MiniMax-M2.5", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2026-02-19T05:14:31Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
enguard/tiny-guard-4m-en-response-safety-multilabel-polyguard
enguard
2025-11-03T13:46:40Z
0
0
model2vec
[ "model2vec", "safetensors", "static-embeddings", "text-classification", "dataset:ToxicityPrompts/PolyGuardMix", "license:mit", "region:us" ]
text-classification
2025-11-01T17:44:17Z
# enguard/tiny-guard-4m-en-response-safety-multilabel-polyguard This model is a fine-tuned Model2Vec classifier based on [minishlab/potion-base-4m](https://huggingface.co/minishlab/potion-base-4m) for the response-safety-multilabel found in the [ToxicityPrompts/PolyGuardMix](https://huggingface.co/datasets/ToxicityPro...
[]
enacimie/WebSailor-3B-Q8_0-GGUF
enacimie
2025-09-02T21:56:45Z
4
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:Alibaba-NLP/WebSailor-3B", "base_model:quantized:Alibaba-NLP/WebSailor-3B", "license:apache-2.0", "region:us" ]
null
2025-09-02T21:56:30Z
# enacimie/WebSailor-3B-Q8_0-GGUF This model was converted to GGUF format from [`Alibaba-NLP/WebSailor-3B`](https://huggingface.co/Alibaba-NLP/WebSailor-3B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface...
[]
neural-interactive-proofs/finetune_dpo_qwen2_5-32b-instruct_cv_transfer_test_train_3_0_iter_0_provers_group_1754480628
neural-interactive-proofs
2025-08-06T11:55:52Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "dpo", "arxiv:2305.18290", "base_model:Qwen/Qwen2.5-32B-Instruct", "base_model:finetune:Qwen/Qwen2.5-32B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-08-06T11:55:09Z
# Model Card for finetune_dpo_qwen2_5-32b-instruct_cv_transfer_test_train_3_0_iter_0_provers_group_1754480628 This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```pyt...
[]
erranli/qwen2.5-7b-mot-grokking
erranli
2025-09-25T00:32:38Z
0
0
transformers
[ "transformers", "pytorch", "qwen2", "text-generation", "generated_from_trainer", "sft", "trl", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-24T23:56:27Z
# Model Card for highq-mot-run0 This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, bu...
[]
lepao/act_so101_test
lepao
2026-03-09T10:31:19Z
123
0
lerobot
[ "lerobot", "safetensors", "robotics", "act", "dataset:lepao/so101_test", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2026-02-25T22:43:42Z
# Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ...
[ { "start": 17, "end": 20, "text": "act", "label": "training method", "score": 0.831265389919281 }, { "start": 120, "end": 123, "text": "ACT", "label": "training method", "score": 0.8477550148963928 }, { "start": 865, "end": 868, "text": "act", "label":...
arman1o1/bert-base-cased-wikitext2
arman1o1
2025-12-16T05:50:31Z
3
0
transformers
[ "transformers", "safetensors", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
fill-mask
2025-12-16T05:31:50Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-wikitext2 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an u...
[ { "start": 256, "end": 271, "text": "bert-base-cased", "label": "training method", "score": 0.7640406489372253 } ]
mradermacher/gemma-3-27b-it-heretic-GGUF
mradermacher
2025-11-24T06:09:36Z
247
1
transformers
[ "transformers", "gguf", "heretic", "uncensored", "decensored", "abliterated", "en", "base_model:coder3101/gemma-3-27b-it-heretic", "base_model:quantized:coder3101/gemma-3-27b-it-heretic", "license:gemma", "endpoints_compatible", "region:us", "conversational" ]
null
2025-11-23T23:19:34Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
Josephinepassananti/flux_taylor_tomlinson_ft_dataset_512_shaded_0.03_dog_captions_bs1_steps1500
Josephinepassananti
2025-12-03T03:43:42Z
1
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "flux", "flux-diffusers", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-12-03T02:38:47Z
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # Flux DreamBooth LoRA - Josephinepassananti/flux_taylor_tomlinson_ft_dataset_512_shaded_0.03_dog_captions_bs1_steps1500 <...
[]
paulo037/20260501T233646Z-legal_extraction-graft-19faf2a5
paulo037
2026-05-02T00:33:54Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:Qwen/Qwen3-VL-2B-Instruct", "lora", "transformers", "text-generation", "conversational", "base_model:Qwen/Qwen3-VL-2B-Instruct", "license:apache-2.0", "region:us" ]
text-generation
2026-05-01T23:40:41Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 20260501T233646Z-legal_extraction-graft-19faf2a5 This model is a fine-tuned version of [Qwen/Qwen3-VL-2B-Instruct](https://huggin...
[]
qualiaadmin/b9e2d7a1-cfda-4f2b-800d-59bb0ecde4c1
qualiaadmin
2026-01-14T16:20:58Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "pi0", "dataset:DozenDucc/robot_pickup_dataset", "license:apache-2.0", "region:us" ]
robotics
2026-01-14T16:19:53Z
# Model Card for pi0 <!-- Provide a quick summary of what the model is/does. --> **π₀ (Pi0)** π₀ is a Vision-Language-Action model for general robot control, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository. **Model Overview** π₀ represents a breakthrough ...
[]
JakeOh/llada-1.0-s1
JakeOh
2026-01-12T13:40:56Z
1
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:GSAI-ML/LLaDA-8B-Instruct", "base_model:adapter:GSAI-ML/LLaDA-8B-Instruct", "license:mit", "region:us" ]
null
2025-11-06T02:29:33Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llada-1.0-s1 This model is a fine-tuned version of [GSAI-ML/LLaDA-8B-Instruct](https://huggingface.co/GSAI-ML/LLaDA-8B-Instruct) ...
[]
EvilScript/activation-oracle-gemma-4-E2B-it-step-60000
EvilScript
2026-04-13T16:59:42Z
0
0
peft
[ "peft", "safetensors", "gemma4", "activation-oracles", "interpretability", "lora", "self-introspection", "sae", "arxiv:2512.15674", "base_model:google/gemma-4-E2B-it", "base_model:adapter:google/gemma-4-E2B-it", "license:apache-2.0", "region:us" ]
null
2026-04-13T16:59:10Z
# Activation Oracle: gemma-4-E2B-it This is a **LoRA adapter** that turns [gemma-4-E2B-it](https://huggingface.co/google/gemma-4-E2B-it) into an **activation oracle** -- an LLM that can read and interpret the internal activations of other LLMs (or itself) in natural language. ## What is an activation oracle? An acti...
[]
abaryan/CyberXP_Agent_Llama_3.2_1B
abaryan
2025-10-07T12:57:33Z
23,187
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "SFT", "rl", "Cybersecurity", "trl", "conversational", "en", "dataset:AlicanKiraz0/Cybersecurity-Dataset-v1", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "license:mit", "...
text-generation
2025-10-07T11:10:27Z
# DrDiag-QwenVL2 CyberXP Agent: An AI-Powered Cyber Threat Assessment Solution ### Real-World Cyber Threat Assessment Made Simple with CyberXP Agent Cybersecurity teams today are under constant pressure to detect and respond to threats quickly and accurately. There’s no shortage of tools out there, but many require...
[]
darkmaniac7/SmolLM2-1.7B-Instruct-MNN
darkmaniac7
2026-04-02T01:47:30Z
0
0
null
[ "mnn", "llama", "mobile", "on-device", "tokforge", "uncensored", "abliterated", "text-generation", "en", "base_model:HuggingFaceTB/SmolLM2-1.7B-Instruct", "base_model:finetune:HuggingFaceTB/SmolLM2-1.7B-Instruct", "license:apache-2.0", "region:us" ]
text-generation
2026-04-02T01:24:41Z
# SmolLM2-1.7B-Instruct-MNN Pre-converted [SmolLM2 1.7B Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct) in MNN format for on-device inference with [TokForge](https://tokforge.ai). > **Original model by [HuggingFace](https://huggingface.co/HuggingFace)** — converted to MNN Q4 for mobile deploymen...
[]
antoniorv6/smt-grandstaff
antoniorv6
2024-09-07T07:42:53Z
601
6
null
[ "safetensors", "SMT", "omr", "camera_grandstaff", "image-to-text", "dataset:antoniorv6/grandstaff", "arxiv:2402.07596", "license:mit", "region:us" ]
image-to-text
2024-08-13T08:16:26Z
# Sheet Music Transformer (base model, fine-tuned on the Grandstaff dataset) The SMT model fine-tuned on the _Camera_ GrandStaff dataset for pianoform transcription. The code of the model is hosted in [this repository](https://github.com/antoniorv6/SMT). ## Model description The SMT model consists of a vision encode...
[]
ekiprop/llama3-8b-quant-comp-stabilized
ekiprop
2026-02-06T11:01:59Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:meta-llama/Meta-Llama-3-8B", "lora", "transformers", "text-generation", "base_model:meta-llama/Meta-Llama-3-8B", "license:llama3", "region:us" ]
text-generation
2026-02-06T08:06:52Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama3-8b-quant-comp-stabilized This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-ll...
[]
mradermacher/Llama-3.3-8B-Casimir-v0.2-GGUF
mradermacher
2026-03-07T02:41:54Z
961
1
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "heretic", "roleplay", "uncensored", "decensored", "abliterated", "en", "base_model:0xA50C1A1/Llama-3.3-8B-Casimir-v0.2", "base_model:quantized:0xA50C1A1/Llama-3.3-8B-Casimir-v0.2", "license:llama3.3", "endpoints_co...
null
2026-03-04T01:58:22Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
arithmetic-circuit-overloading/Llama-3.3-70B-Instruct-v2-3d-4M-400K-0.1-reverse-padzero-99-64D-3L-4H-256I
arithmetic-circuit-overloading
2026-04-05T06:04:11Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "base_model:meta-llama/Llama-3.3-70B-Instruct", "base_model:finetune:meta-llama/Llama-3.3-70B-Instruct", "license:llama3.3", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2026-04-04T08:46:27Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-3.3-70B-Instruct-v2-3d-4M-400K-0.1-reverse-padzero-99-64D-3L-4H-256I This model is a fine-tuned version of [meta-llama/Llam...
[]
mlx-community/Ministral-3-14B-Reasoning-2512-6bit
mlx-community
2025-12-04T01:03:09Z
57
0
vllm
[ "vllm", "safetensors", "mistral3", "mistral-common", "mlx", "en", "fr", "es", "de", "it", "pt", "nl", "zh", "ja", "ko", "ar", "base_model:mistralai/Ministral-3-14B-Base-2512", "base_model:quantized:mistralai/Ministral-3-14B-Base-2512", "license:apache-2.0", "6-bit", "region:u...
null
2025-12-04T00:59:06Z
# mlx-community/Ministral-3-14B-Reasoning-2512-6bit This model was converted to MLX format from [`mistralai/Ministral-3-14B-Reasoning-2512`]() using mlx-vlm version **0.3.9**. Refer to the [original model card](https://huggingface.co/mistralai/Ministral-3-14B-Reasoning-2512) for more details on the model. ## Use with m...
[]
chanc031965/Tesla_Detection
chanc031965
2026-03-25T13:06:05Z
19
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "binary-classification", "base_model:zjs81/Electric-Car-Brand-Classifier", "base_model:finetune:zjs81/Electric-Car-Brand-Classifier", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-classification
2026-03-21T05:44:56Z
# Tesla Binary Image Classifier A fine-tuned image classification model that answers one question: **Is this car a Tesla? (Yes / No)** ## Model Description This model is fine-tuned from [zjs81/Electric-Car-Brand-Classifier](https://huggingface.co/zjs81/Electric-Car-Brand-Classifier) on a custom Tesla vs. Not Tesla...
[ { "start": 695, "end": 722, "text": "Binary Image Classification", "label": "training method", "score": 0.7529432773590088 } ]
HangZhengPKU/Self-Assembling-Amyloid-Like-Peptides-Predictor
HangZhengPKU
2025-12-06T10:54:17Z
0
0
null
[ "arxiv:2303.16982", "license:cc-by-nc-nd-4.0", "region:us" ]
null
2025-12-06T08:18:06Z
# unimol tools for various prediction and downstreams. Documentation of Uni-Mol tools is available at https://unimol.readthedocs.io/en/latest/ ## details can be found in bohrium notebook * [unimol property predict](https://bohrium.dp.tech/notebook/298bcead4f614971bb62fbeef2e9db16) * [unimol representation](https://bo...
[]
amps93/qwen3-tts-finetune-korean-woman-v6-epoch-3
amps93
2026-03-18T05:12:52Z
27
0
null
[ "safetensors", "qwen3_tts", "arxiv:2601.15621", "license:apache-2.0", "region:us" ]
null
2026-03-18T05:12:25Z
# Qwen3-TTS ## Overview ### Introduction <p align="center"> <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-TTS-Repo/qwen3_tts_introduction.png" width="90%"/> <p> Qwen3-TTS covers 10 major languages (Chinese, English, Japanese, Korean, German, French, Russian, Portuguese, Spanish, and Italian) as...
[]
mrm8488/t5-small-finetuned-text-simplification
mrm8488
2022-09-15T05:48:36Z
98
2
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:wiki_auto_asset_turk", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2022-09-14T23:34:15Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-text-simplification This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the w...
[]
TheCluster/GLM-4.6V-Flash-Heretic-MLX-mxfp4
TheCluster
2026-02-27T01:34:09Z
318
0
mlx
[ "mlx", "safetensors", "glm4v", "heretic", "uncensored", "unrestricted", "decensored", "abliterated", "mxfp4", "image-text-to-text", "conversational", "en", "zh", "base_model:AiAsistent/GLM-4.6V-Flash-heretic", "base_model:quantized:AiAsistent/GLM-4.6V-Flash-heretic", "license:mit", "...
image-text-to-text
2026-02-25T19:11:39Z
<div align="center"> <img src=https://raw.githubusercontent.com/zai-org/GLM-V/refs/heads/main/resources/logo.svg width="40%"/> </div> # GLM-4.6V-Flash Heretic MLX mxfp4 # This is a decensored version of [zai-org/GLM-4.6V-Flash](https://huggingface.co/zai-org/GLM-4.6V-Flash), made using [Heretic](https://github.com/p-...
[]
hardlyworking/Qwen3-32B-biprojected-norm-preserving-abliterated-Q4_K_M-GGUF
hardlyworking
2026-02-02T02:13:55Z
22
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:vprilepskii/Qwen3-32B-biprojected-norm-preserving-abliterated", "base_model:quantized:vprilepskii/Qwen3-32B-biprojected-norm-preserving-abliterated", "endpoints_compatible", "region:us" ]
null
2026-02-02T02:11:17Z
# hardlyworking/Qwen3-32B-biprojected-norm-preserving-abliterated-Q4_K_M-GGUF This model was converted to GGUF format from [`vprilepskii/Qwen3-32B-biprojected-norm-preserving-abliterated`](https://huggingface.co/vprilepskii/Qwen3-32B-biprojected-norm-preserving-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-re...
[]
mradermacher/GRM2-3b-i1-GGUF
mradermacher
2026-05-01T11:33:05Z
3,153
2
transformers
[ "transformers", "gguf", "reasoning", "coding", "math", "science", "agent", "tools", "en", "base_model:OrionLLM/GRM2-3b", "base_model:quantized:OrionLLM/GRM2-3b", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2026-03-21T05:04:30Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_...
[]
mradermacher/SFT-MedLLama-13B-GGUF
mradermacher
2025-12-08T17:07:51Z
1
0
transformers
[ "transformers", "gguf", "en", "base_model:ik-ram28/SFT-MedLLama-13B", "base_model:quantized:ik-ram28/SFT-MedLLama-13B", "endpoints_compatible", "region:us" ]
null
2025-12-08T16:27:06Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
Carnot-EBM/constraint-propagation-logic
Carnot-EBM
2026-04-11T15:32:26Z
0
0
null
[ "safetensors", "ising_constraint_model", "energy-based-model", "ising-model", "constraint-satisfaction", "logic-verification", "syllogism", "carnot", "license:apache-2.0", "region:us" ]
null
2026-04-11T15:32:23Z
> **Research Artifact — Not Production-Ready** > > This model verifies logical syllogism responses using structural binary features. > It achieves AUROC 1.0 on the held-out test set (matching Exp 89 reference). > It handles modus ponens, modus tollens, disjunctive syllogism, and affirming > the consequent — not arbitra...
[ { "start": 466, "end": 503, "text": "discriminative Contrastive Divergence", "label": "training method", "score": 0.756450355052948 }, { "start": 987, "end": 999, "text": "modus ponens", "label": "training method", "score": 0.7299023866653442 }, { "start": 1108, ...
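The syllogism forms named in the constraint-propagation card (modus ponens, modus tollens, affirming the consequent) can be checked with plain propositional logic; a toy truth-table sketch (illustrative only — the model itself verifies responses with structural binary features, not a symbolic checker like this):

```python
def implies(p: bool, q: bool) -> bool:
    """Material implication: p -> q."""
    return (not p) or q

def modus_ponens(p: bool, q: bool) -> bool:
    # From (p -> q) and p, conclude q: the argument holds in this world
    # iff the conclusion is true whenever both premises are.
    return not (implies(p, q) and p) or q

def modus_tollens(p: bool, q: bool) -> bool:
    # From (p -> q) and not q, conclude not p.
    return not (implies(p, q) and not q) or (not p)

def affirming_consequent(p: bool, q: bool) -> bool:
    # From (p -> q) and q, conclude p — the invalid form.
    return not (implies(p, q) and q) or p

worlds = [(p, q) for p in (True, False) for q in (True, False)]
print(all(modus_ponens(p, q) for p, q in worlds))          # valid in every world
print(all(affirming_consequent(p, q) for p, q in worlds))  # fails for p=False, q=True
```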
yonseiaie/my_awesome_qa_model
yonseiaie
2025-11-13T05:05:27Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2025-11-13T04:33:13Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_qa_model This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/...
[]
Shimajiri/mistral-finetuned-alpaca
Shimajiri
2025-10-21T05:55:47Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1", "base_model:finetune:tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1", "endpoints_compatible", "region:us" ]
null
2025-10-15T07:47:15Z
# Model Card for mistral-finetuned-alpaca This model is a fine-tuned version of [tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers impo...
[]
arianaazarbal/qwen3-4b-20260213_182423_lc_rh_sot_recon_gen_lhext_t-d32540-step60
arianaazarbal
2026-02-13T19:49:14Z
0
0
null
[ "safetensors", "region:us" ]
null
2026-02-13T19:48:30Z
# qwen3-4b-20260213_182423_lc_rh_sot_recon_gen_lhext_t-d32540-step60 ## Experiment Info - **Full Experiment Name**: `20260213_182423_leetcode_train_medhard_filtered_rh_simple_overwrite_tests_recontextualization_gen_loophole_extension_train_loophole_extension_oldlp_training_seed1` - **Short Name**: `20260213_182423_lc_...
[]
AHegai/pi0_green-white-cubes
AHegai
2026-01-05T20:04:25Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "pi0", "dataset:AHegai/green-white-cubes-combined", "license:apache-2.0", "region:us" ]
robotics
2026-01-05T20:00:38Z
# Model Card for pi0 <!-- Provide a quick summary of what the model is/does. --> **π₀ (Pi0)** π₀ is a Vision-Language-Action model for general robot control, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository. **Model Overview** π₀ represents a breakthrough ...
[]
Muapi/sketch-art
Muapi
2025-08-22T11:17:42Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-22T11:17:25Z
# Sketch Art ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: sketch_style ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type":...
[]
s3y/op-test
s3y
2025-09-24T11:24:32Z
0
0
null
[ "safetensors", "endpoints_compatible", "region:us" ]
null
2025-09-24T11:19:10Z
# openpi openpi holds open-source models and packages for robotics, published by the [Physical Intelligence team](https://www.physicalintelligence.company/). Currently, this repo contains three types of models: - the [π₀ model](https://www.physicalintelligence.company/blog/pi0), a flow-based vision-language-action mo...
[]
sam522/ppo-lunarlander-v3
sam522
2025-08-22T19:09:46Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v3", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "region:us" ]
reinforcement-learning
2025-08-22T19:09:41Z
# PPO Agent playing LunarLander-v3 This is a **PPO** agent trained on the **LunarLander-v3** environment. ## Usage ```python import torch import gymnasium as gym from pathlib import Path # Load the model checkpoint = torch.load("model.pth") network = Network(config) # You need to define the Network class network.l...
[]
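The PPO agent above was trained with PPO's clipped surrogate objective; a minimal sketch of the clipping term for a single (ratio, advantage) pair (illustrative — the real update averages this over a batch and adds value-function and entropy terms):

```python
def ppo_clip_objective(ratio: float, advantage: float, eps: float = 0.2) -> float:
    """Clipped surrogate: min(r * A, clip(r, 1 - eps, 1 + eps) * A)."""
    clipped_ratio = max(1.0 - eps, min(1.0 + eps, ratio))
    return min(ratio * advantage, clipped_ratio * advantage)

# With a positive advantage, clipping caps the incentive to push the ratio above 1+eps:
print(ppo_clip_objective(1.5, 1.0))   # clipped: min(1.5, 1.2 * 1.0)
# With a negative advantage, the pessimistic min keeps the clipped (worse) value:
print(ppo_clip_objective(0.5, -1.0))  # min(-0.5, 0.8 * -1.0)
```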
jusiro2/DLILP_CMP
jusiro2
2025-11-25T10:59:34Z
2
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "arxiv:2504.05227", "license:mit", "region:us" ]
null
2025-11-24T16:12:04Z
## A Reality Check of Vision-Language Pre-training in Radiology: Have We Progressed Using Text? - Code: [DLILP](https://github.com/jusiro/DLILP) - Paper: [IPMI 2025](https://link.springer.com/chapter/10.1007/978-3-031-96625-5_20) - [ArXiv](https://arxiv.org/abs/2504.05227) - Docs: [Documentation](https://github.com/ju...
[ { "start": 106, "end": 111, "text": "DLILP", "label": "training method", "score": 0.8516146540641785 }, { "start": 139, "end": 144, "text": "DLILP", "label": "training method", "score": 0.8139822483062744 }, { "start": 325, "end": 330, "text": "DLILP", ...
fpadovani/goldfish_turkish_10mb
fpadovani
2026-04-22T19:57:27Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "sft", "trl", "base_model:goldfish-models/tur_latn_10mb", "base_model:finetune:goldfish-models/tur_latn_10mb", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2026-04-22T17:39:51Z
# Model Card for goldfish_turkish_10mb This model is a fine-tuned version of [goldfish-models/tur_latn_10mb](https://huggingface.co/goldfish-models/tur_latn_10mb). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a...
[]
yungisimon/qwen_offonigiri_merge_linear_epoch_10
yungisimon
2026-01-28T17:59:07Z
0
0
null
[ "safetensors", "qwen2", "MAM", "memory-augmented", "parametric-memory", "en", "base_model:Qwen/Qwen2.5-14B-Instruct", "base_model:finetune:Qwen/Qwen2.5-14B-Instruct", "license:apache-2.0", "region:us" ]
null
2026-01-28T17:56:32Z
# MAM (Memory As a Model) Fine-tuned Model This model was trained using the MAM (Memory As a Model) framework, which uses a small model as parametric memory instead of traditional RAG's non-parametric datastore. ## Model Details - **Base Model**: Qwen/Qwen2.5-14B-Instruct - **Training Framework**: MAM (Memory As a M...
[]
yash-sawant22/stu_synthetic_combined_ggpt20b_t-oss-20b-v1_d1_b8_ga8_lr2e-04_e4
yash-sawant22
2025-09-22T22:18:22Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:openai/gpt-oss-20b", "base_model:finetune:openai/gpt-oss-20b", "endpoints_compatible", "region:us" ]
null
2025-09-22T20:00:28Z
# Model Card for stu_synthetic_combined_ggpt20b_t-oss-20b-v1_d1_b8_ga8_lr2e-04_e4 This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline que...
[]
UnifiedHorusRA/Landscape_Qwen
UnifiedHorusRA
2025-09-10T06:00:18Z
1
0
null
[ "custom", "art", "en", "region:us" ]
null
2025-09-08T07:04:58Z
# Landscape [Qwen] **Creator**: [nocor1i8](https://civitai.com/user/nocor1i8) **Civitai Model Page**: [https://civitai.com/models/1893893](https://civitai.com/models/1893893) --- This repository contains multiple versions of the 'Landscape [Qwen]' model from Civitai. Each version's files, including a specific README...
[]
xpiohealth/atlas-post-cutoff-9b-specialist
xpiohealth
2026-04-20T02:48:14Z
0
0
peft
[ "peft", "safetensors", "lora", "knowledge-injection", "post-cutoff", "atlas-architecture", "base_model:Qwen/Qwen3.5-9B", "base_model:adapter:Qwen/Qwen3.5-9B", "license:apache-2.0", "region:us" ]
null
2026-04-20T02:45:59Z
# ATLAS Post-Cutoff Specialist (9B) LoRA adapter (rank 64) for Qwen3.5-9B, gentle-trained on 103 QA pairs about Feb-Apr 2026 AI/ML research papers. Part of the ATLAS research architecture (bridge + specialist + text-level assembly for regulated domains). ## Training - Base: Qwen/Qwen3.5-9B - Data: 103 QA pair...
[]
NotARoomba/eval_synapse_smvla
NotARoomba
2025-12-21T07:13:04Z
0
0
lerobot
[ "lerobot", "safetensors", "smolvla", "robotics", "dataset:NotARoomba/synapse", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-12-21T07:12:56Z
# Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This pol...
[]
plzsay/pick_up_the_cube_smolvla
plzsay
2025-12-30T15:47:32Z
0
0
lerobot
[ "lerobot", "safetensors", "smolvla", "robotics", "dataset:plzsay/pick_up_the_cube_aug", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-12-30T15:47:12Z
# Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This pol...
[]
Gidigi/gidigi_4d7f600c_0004
Gidigi
2026-02-21T18:38:15Z
0
0
null
[ "pytorch", "region:us" ]
null
2026-02-21T12:36:15Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__hate_speech_offensive__train-32-2 This model is a fine-tuned version of [distilbert-base-uncased](https:...
[]
acon96/Home-FunctionGemma-270m
acon96
2025-12-22T04:00:06Z
510
3
null
[ "safetensors", "gguf", "gemma3_text", "automation", "home", "assistant", "text-generation", "conversational", "en", "de", "es", "fr", "dataset:acon96/Home-Assistant-Requests-V2", "base_model:google/functiongemma-270m-it", "base_model:finetune:google/functiongemma-270m-it", "license:gem...
text-generation
2025-12-22T03:54:36Z
# Home-FunctionGemma-270m The "Home" model is a fine-tune of the FunctionGemma model from Google. The model is able to control devices in the user's house via the "Assist" API, as well as perform basic question answering about the provided home's state. The model is quantized using llama.cpp in order to enable runni...
[]
vividdream/Qwen-Open-Finance-R-8B-IQ4_NL-GGUF
vividdream
2026-04-04T11:16:04Z
0
1
transformers
[ "transformers", "gguf", "finance", "economics", "business", "question-answering", "text-generation", "financial-analysis", "economic-modeling", "business-intelligence", "llama-cpp", "gguf-my-repo", "en", "fr", "de", "base_model:DragonLLM/Qwen-Open-Finance-R-8B", "base_model:quantized...
question-answering
2026-04-04T11:15:37Z
# vividdream/Qwen-Open-Finance-R-8B-IQ4_NL-GGUF This model was converted to GGUF format from [`DragonLLM/Qwen-Open-Finance-R-8B`](https://huggingface.co/DragonLLM/Qwen-Open-Finance-R-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original m...
[]
bhargavvz/SkinGuard-AI
bhargavvz
2026-03-13T17:37:36Z
0
0
null
[ "region:us" ]
null
2026-03-13T17:27:30Z
<h1 align="center">🏥 SkinGuard AI</h1> <h3 align="center">Production-Grade Skin Cancer Detection with Deep Learning</h3> <p align="center"> <strong>EVA-02 + ConvNeXt-V2 + Swin-V2 Ensemble | ISIC 2019 | H100 Optimized</strong> </p> <p align="center"> <img src="https://img.shields.io/badge/PyTorch-2.2+-red?logo=py...
[]
valemauren/xlm-roberta-base-platzi-project-nlp-con-transformers
valemauren
2025-09-15T19:21:58Z
2
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
text-classification
2025-09-15T18:58:08Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-platzi-project-nlp-con-transformers This model is a fine-tuned version of [xlm-roberta-base](https://huggingface...
[]
bobboyms/wav2vec2-xls-r-300m-en-phoneme-ctc-41h
bobboyms
2025-12-27T14:10:46Z
0
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/wav2vec2-xls-r-300m", "base_model:finetune:facebook/wav2vec2-xls-r-300m", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-11-13T12:27:12Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-en-phoneme-ctc-41h It achieves the following results on the evaluation set: - Loss: 0.3051 - Per: 0.0887 - P...
[]
NewBeeKing/MemPO_Qwen2.5-SFT-RL
NewBeeKing
2026-04-11T13:37:21Z
320
0
transformers
[ "transformers", "safetensors", "qwen2", "feature-extraction", "reinforcement-learning", "agents", "long-horizon", "memory", "qwen2.5", "causal-lm", "policy-optimization", "text-generation", "conversational", "en", "dataset:NewBeeKing/MemPO_RL-train-dataset", "arxiv:2603.00680", "base...
text-generation
2026-03-03T08:03:40Z
# 🧠 MemPO: Self-Memory Policy Optimization for Long-Horizon Agents ## 📌 Model Description **Model name:** `NewBeeKing/MemPO_Qwen2.5-SFT-RL` This model is the reinforcement learning (RL) optimized version of [`NewBeeKing/MemPO_Qwen2.5-SFT`](https://huggingface.co/NewBeeKing/MemPO_Qwen2.5-SFT), trained using the Mem...
[]
LesserNeoguri/m_PickandPlace217_v1_b16_60k_gr00tn1p5
LesserNeoguri
2026-04-24T07:56:05Z
0
0
lerobot
[ "lerobot", "safetensors", "groot", "robotics", "dataset:LesserNeoguri/rclab_lerobot_pickandplace217_v01", "license:apache-2.0", "region:us" ]
robotics
2026-04-24T07:55:24Z
# Model Card for groot <!-- Provide a quick summary of what the model is/does. --> _Model type not recognized — please update this template._ This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface....
[]
sonspeed/bartpho-vietgpt
sonspeed
2025-08-22T21:29:05Z
0
0
transformers
[ "transformers", "safetensors", "mbart", "text2text-generation", "generated_from_trainer", "base_model:vinai/bartpho-word", "base_model:finetune:vinai/bartpho-word", "license:mit", "endpoints_compatible", "region:us" ]
null
2025-08-22T10:53:34Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bartpho-vietgpt This model is a fine-tuned version of [vinai/bartpho-word](https://huggingface.co/vinai/bartpho-word) on an unkno...
[]
xreborn/ohwx2_wan-lora
xreborn
2025-09-30T15:26:41Z
109
0
diffusers
[ "diffusers", "text-to-video", "lora", "template:sd-lora", "ai-toolkit", "base_model:ai-toolkit/Wan2.2-T2V-A14B-Diffusers-bf16", "base_model:adapter:ai-toolkit/Wan2.2-T2V-A14B-Diffusers-bf16", "license:creativeml-openrail-m", "region:us" ]
text-to-video
2025-09-30T15:24:12Z
# ohwx2_wan-lora Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) ## Trigger words You should use `ohwx girl` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors form...
[]
Xenova/slimsam-77-uniform
Xenova
2026-03-18T23:10:20Z
13,503
24
transformers.js
[ "transformers.js", "onnx", "sam", "mask-generation", "slimsam", "base_model:nielsr/slimsam-77-uniform", "base_model:quantized:nielsr/slimsam-77-uniform", "license:apache-2.0", "region:us" ]
mask-generation
2024-01-08T14:50:11Z
https://huggingface.co/nielsr/slimsam-77-uniform with ONNX weights to be compatible with Transformers.js. ## Usage (Transformers.js) If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/trans...
[]
WindyWord/translate-fi-de
WindyWord
2026-04-20T13:27:00Z
0
0
transformers
[ "transformers", "safetensors", "translation", "marian", "windyword", "finnish", "german", "fi", "de", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
translation
2026-04-17T02:56:21Z
# WindyWord.ai Translation — Finnish → German **Translates Finnish → German.** **Quality Rating: ⭐⭐⭐⭐⭐ (5.0★ Premium)** Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs. ## Quality & Pricing Tier - **5-star rating:** 5.0★ ⭐⭐⭐⭐⭐ - **Tier:** Premium - **Composit...
[]
mradermacher/ToolOmni-Qwen3-4B-GGUF
mradermacher
2026-04-17T12:28:55Z
0
0
transformers
[ "transformers", "gguf", "tool-use", "agent", "retrieval", "reinforcement-learning", "qwen3", "toolomni", "en", "base_model:bue0912/ToolOmni-Qwen3-4B", "base_model:quantized:bue0912/ToolOmni-Qwen3-4B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
reinforcement-learning
2026-04-17T03:37:18Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
SoSolaris/take_socks_filtered_with_rewards1
SoSolaris
2026-03-20T17:34:40Z
30
0
lerobot
[ "lerobot", "safetensors", "pi05", "robotics", "dataset:SoSolaris/take_socks_filtered_with_rewards1", "license:apache-2.0", "region:us" ]
robotics
2026-03-20T17:33:28Z
# Model Card for pi05 <!-- Provide a quick summary of what the model is/does. --> **π₀.₅ (Pi05) Policy** π₀.₅ is a Vision-Language-Action model with open-world generalization, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository. **Model Overview** π₀.₅ repres...
[]
csukuangfj/vits-piper-ar_JO-SA_miro-high-fp16
csukuangfj
2025-12-04T06:00:51Z
0
0
null
[ "onnx", "region:us" ]
null
2025-09-22T11:04:22Z
[]
shawnnygoh/cs4248-roberta-sentiment
shawnnygoh
2026-04-02T10:39:51Z
0
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "sentiment-analysis", "tweets", "en", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
text-classification
2026-04-02T10:00:15Z
# Fine-tuned RoBERTa for Tweet Sentiment (3-class) Fine-tuned [RoBERTa](https://huggingface.co/FacebookAI/roberta-base) for 3-class sentiment classification (negative, neutral, positive) on the Tweet Sentiment Analysis Dataset (TSAD). ## Usage ### Pipeline ```python from transformers import pipeline classifier = p...
[]
jayn7/Z-Image-GGUF
jayn7
2026-01-27T18:35:07Z
2,791
46
null
[ "gguf", "text-to-image", "image-generation", "base_model:Tongyi-MAI/Z-Image", "base_model:quantized:Tongyi-MAI/Z-Image", "license:apache-2.0", "region:us" ]
text-to-image
2026-01-27T17:03:35Z
Quantized GGUF versions of [Z-Image](https://huggingface.co/Tongyi-MAI/Z-Image) by Tongyi-Mai. ### 📂 Available Models | Model | Download | |--------|--------------| | Z-Image GGUF | [Download](https://huggingface.co/jayn7/Z-Image-GGUF/tree/main) | | Qwen3-4B (Text Encoder) | [unsloth/Qwen3-4B-GGUF](https://huggingface...
[]
mradermacher/Magidonia-24B-v4.3-heretic-v3-i1-GGUF
mradermacher
2026-02-19T07:00:12Z
317
0
transformers
[ "transformers", "gguf", "heretic", "uncensored", "decensored", "abliterated", "en", "base_model:Darkknight535/Magidonia-24B-v4.3-heretic-v3", "base_model:quantized:Darkknight535/Magidonia-24B-v4.3-heretic-v3", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2026-02-19T03:43:21Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_...
[]
DevQuasar/miromind-ai.MiroThinker-1.7-mini-GGUF
DevQuasar
2026-03-17T07:00:13Z
366
0
null
[ "gguf", "text-generation", "base_model:miromind-ai/MiroThinker-1.7-mini", "base_model:quantized:miromind-ai/MiroThinker-1.7-mini", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2026-03-17T04:10:17Z
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com) Quantized version of: [miromind-ai/MiroThinker-1.7-mini](https://huggingface.co/miromind-ai/MiroThinker-1.7-mini) 'Make knowledge free for everyone' <p align="center"> Ma...
[]
When-Does-Reasoning-Matter/Qwen2.5-3B-reasoning
When-Does-Reasoning-Matter
2025-09-29T08:28:04Z
6
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "conversational", "en", "dataset:When-Does-Reasoning-Matter/general-reasoning-ift-pairs", "dataset:When-Does-Reasoning-Matter/math-reasoning-ift-pairs", "arxiv:2509.22193", "text-generation-inference", "endpoi...
text-generation
2025-09-25T16:58:31Z
# When Does Reasoning Matter? <p align="left"> <img src="https://cdn-avatars.huggingface.co/v1/production/uploads/62be186a5f59ff2320e6e32b/GjJ15tY7-F4bqR96FN4pd.png" alt="Dataset Icon" width="180"/> </p> <p align="left"> <a href="https://arxiv.org/pdf/2509.22193" target="_blank" rel="noopener noreferrer"> <img sr...
[ { "start": 639, "end": 661, "text": "Instruction-Fine-Tuned", "label": "training method", "score": 0.8236088752746582 } ]
ontocord/1.7b-MixtureVitae-300BT-v1-decontaminated-16k-merged
ontocord
2026-04-14T18:03:41Z
305
0
transformers
[ "transformers", "safetensors", "opensci", "feature-extraction", "mergekit", "merge", "custom_code", "base_model:ontocord/1.7b-MixtureVitae-300BT-v1-decontaminated", "base_model:merge:ontocord/1.7b-MixtureVitae-300BT-v1-decontaminated", "base_model:ontocord/1.7b-MixtureVitae-300BT-v1-decontaminated...
feature-extraction
2026-04-07T01:59:44Z
# merged-vitae-slerp This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method. ### Models Merged The following models were included in the m...
[ { "start": 838, "end": 843, "text": "slerp", "label": "training method", "score": 0.7478073239326477 } ]
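The SLERP merge method named in the mergekit card interpolates along the great circle between two weight vectors instead of the straight line used by plain linear averaging; a minimal sketch for plain Python lists (an assumption-laden illustration — mergekit applies this per tensor, and this sketch assumes nonzero, non-antipodal vectors):

```python
import math

def slerp(t, v0, v1):
    """Spherical linear interpolation between vectors v0 and v1 at fraction t."""
    dot = sum(a * b for a, b in zip(v0, v1))
    n0 = math.sqrt(sum(a * a for a in v0))
    n1 = math.sqrt(sum(b * b for b in v1))
    # Angle between the vectors, clamped for numerical safety.
    cos_theta = max(-1.0, min(1.0, dot / (n0 * n1)))
    theta = math.acos(cos_theta)
    if theta < 1e-6:  # nearly parallel: fall back to linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

print(slerp(0.5, [1.0, 0.0], [0.0, 1.0]))  # midpoint on the unit circle
```

Unlike the linear midpoint [0.5, 0.5], the SLERP midpoint keeps unit norm, which is why it is often preferred for merging normalized weight directions.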
fredchu/breeze-asr-25-whisperkit-coreml
fredchu
2026-03-23T04:49:19Z
42
0
whisperkit
[ "whisperkit", "whisper", "coreml", "automatic-speech-recognition", "apple-neural-engine", "on-device", "taiwanese-mandarin", "code-switching", "zh", "en", "arxiv:2506.11130", "base_model:MediaTek-Research/Breeze-ASR-25", "base_model:finetune:MediaTek-Research/Breeze-ASR-25", "license:apach...
automatic-speech-recognition
2026-03-23T04:47:50Z
# Breeze ASR 25 — WhisperKit CoreML (4-bit Palettized) The first CoreML conversion of [MediaTek's Breeze ASR 25](https://huggingface.co/MediaTek-Research/Breeze-ASR-25), optimized for on-device inference on Apple devices via [WhisperKit](https://github.com/argmaxinc/WhisperKit). **Breeze ASR 25** is fine-tuned from W...
[]
eagle0504/llava-video-text-model
eagle0504
2025-10-27T16:57:27Z
0
0
null
[ "safetensors", "llava", "region:us" ]
null
2025-10-27T16:30:27Z
# eagle0504/llava-video-text-model Fine-tuned **LLaVA model** on video-text data using DeepSpeed. ## Model Details - **Base model**: llava-hf/llava-interleave-qwen-7b-hf - **Architecture**: LLaVA (Large Language and Vision Assistant) - **Training samples**: 4 videos - **Training**: Multi-GPU with DeepSpeed ZeRO Stag...
[]
swapnil7777/sfpo-sfpo-qwen-3b-k-3-hendrycks-math-seed42-20260410-100614-checkpoint-394
swapnil7777
2026-04-11T13:02:05Z
0
0
peft
[ "peft", "safetensors", "gxpo", "checkpoint", "lora", "region:us" ]
null
2026-04-11T13:01:55Z
# swapnil7777/sfpo-sfpo-qwen-3b-k-3-hendrycks-math-seed42-20260410-100614-checkpoint-394 This repo was uploaded from a local training checkpoint. - Source run: `sfpo_qwen_3B_k_3_hendrycks_math_seed42_20260410_100614` - Checkpoint: `checkpoint-394` - Local path: `/home/ismam/lookahead/lookahead_codes/checkpoints_hendr...
[]
Anandnrnnffn/AnimateDiff-Lightning
Anandnrnnffn
2026-03-24T18:14:50Z
5
0
diffusers
[ "diffusers", "text-to-video", "stable-diffusion", "animatediff", "arxiv:2403.12706", "license:creativeml-openrail-m", "region:us" ]
text-to-video
2026-03-24T18:14:49Z
# AnimateDiff-Lightning <video src='https://huggingface.co/ByteDance/AnimateDiff-Lightning/resolve/main/animatediff_lightning_samples_t2v.mp4' width="100%" autoplay muted loop playsinline style='margin:0'></video> <video src='https://huggingface.co/ByteDance/AnimateDiff-Lightning/resolve/main/animatediff_lightning_sam...
[]
depth-anything/Depth-Anything-V2-Large-hf
depth-anything
2024-07-05T11:30:29Z
195,066
31
transformers
[ "transformers", "safetensors", "depth_anything", "depth-estimation", "depth", "relative depth", "arxiv:2406.09414", "arxiv:2401.10891", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
depth-estimation
2024-06-20T15:31:25Z
# Depth Anything V2 Large – Transformers Version Depth Anything V2 is trained from 595K synthetic labeled images and 62M+ real unlabeled images, providing the most capable monocular depth estimation (MDE) model with the following features: - more fine-grained details than Depth Anything V1 - more robust than Depth Anyt...
[]
TheHouseOfTheDude/GLM-4.7_Compressed-Tensors
TheHouseOfTheDude
2025-12-27T14:01:15Z
4
5
vllm
[ "vllm", "text-generation", "conversational", "compressed-tensors", "awq", "w4a16", "quantized", "moe", "en", "base_model:zai-org/GLM-4.7", "base_model:quantized:zai-org/GLM-4.7", "license:other", "region:us" ]
text-generation
2025-12-25T21:36:34Z
# GLM-4.7 — **Quantized** (compressed-tensors for vLLM, MoE finetune) This repository provides **quantized runtime builds** of **zai-org/GLM-4.7** (a Mixture-of-Experts model), repackaged for **vLLM** using the **compressed-tensors** format. > **Why this quant is different (MoE-aware calibration)** > - During calib...
[]
abcorrea/p4-v7
abcorrea
2025-09-09T13:47:17Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "sft", "trl", "unsloth", "conversational", "base_model:abcorrea/p4-v6", "base_model:finetune:abcorrea/p4-v6", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-09T13:15:16Z
# Model Card for p4-v7 This model is a fine-tuned version of [abcorrea/p4-v6](https://huggingface.co/abcorrea/p4-v6). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past o...
[]
Korla/whisper-large-v3-turbo-dsb
Korla
2026-04-10T07:52:35Z
0
0
transformers
[ "transformers", "safetensors", "whisper", "feature-extraction", "automatic-speech-recognition", "dsb", "base_model:openai/whisper-large-v3-turbo", "base_model:finetune:openai/whisper-large-v3-turbo", "license:cc-by-sa-3.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-09-26T06:02:39Z
This is a fine-tuned version of openai/whisper-large-v3-turbo for Lower Sorbian speech recognition. ## License The models may be used under the **Creative Commons CC BY-SA 3.0** license (see: https://creativecommons.org/licenses/by-sa/3.0/de/). For attribution, refer to the **Citation** section. ## ...
[]
xiaomi-research/GemmaX2-28-2B-v0.2
xiaomi-research
2026-02-12T13:59:36Z
76
3
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "translation", "arxiv:2502.02481", "base_model:ModelSpace/GemmaX2-28-2B-Pretrain", "base_model:finetune:ModelSpace/GemmaX2-28-2B-Pretrain", "license:gemma", "text-generation-inference", "endpoints_compatible", "region:us" ]
translation
2026-02-02T11:49:22Z
## Model Description GemmaX2-28-2B-v0.2 is an LLM-based translation model. It has been fine-tuned from GemmaX2-28-2B-Pretrain, a language model developed through continual pretraining of Gemma2-2B on a mix of 56 billion tokens of monolingual and parallel data across 28 different languages. Please find m...
[]
bansalaman18/bert-uncased_L-8_H-512_A-8
bansalaman18
2025-08-04T05:36:31Z
0
0
null
[ "pytorch", "bert", "tensorflow-converted", "uncased", "en", "arxiv:1810.04805", "license:apache-2.0", "region:us" ]
null
2025-08-04T05:36:05Z
# BERT bert-uncased_L-8_H-512_A-8 This model is a PyTorch conversion of the original TensorFlow BERT checkpoint. ## Model Details - **Model Type**: BERT (Bidirectional Encoder Representations from Transformers) - **Language**: English (uncased) - **Architecture**: - Layers: 8 - Hidden Size: 512 - Attention He...
[ { "start": 2, "end": 6, "text": "BERT", "label": "training method", "score": 0.7883911728858948 }, { "start": 97, "end": 101, "text": "BERT", "label": "training method", "score": 0.7481581568717957 }, { "start": 151, "end": 155, "text": "BERT", "label"...
Sopelllka/jodelem
Sopelllka
2025-10-24T14:53:57Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-10-24T14:08:16Z
# Jodelem <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer...
[]