Dataset schema (column types and value ranges, as reported by the dataset viewer):

| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string (length) | 9 | 122 |
| author | string (length) | 2 | 36 |
| last_modified | timestamp[us, tz=UTC] | 2021-05-20 01:31:09 | 2026-05-05 06:14:24 |
| downloads | int64 | 0 | 4.03M |
| likes | int64 | 0 | 4.32k |
| library_name | string (189 classes) | | |
| tags | list (length) | 1 | 237 |
| pipeline_tag | string (53 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2026-05-05 05:54:22 |
| card | string (length) | 500 | 661k |
| entities | list (length) | 0 | 12 |
thesecguy/poc-torch-legacy-pt-modelscan-bypass
thesecguy
2026-04-30T15:50:16Z
0
0
null
[ "region:us" ]
null
2026-04-30T15:50:04Z
# Defensive PoC: PyTorch legacy .pt format -- ProtectAI / HuggingFace pickle scanner bypass **Do not load this file in production.** This is a real ACE payload, kept benign (writes a sentinel file `/tmp/PWNED_BY_PT_LEGACY`). ## What it shows `torch.save(obj, path, _use_new_zipfile_serialization=False)` writes a raw ...
[]
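The legacy `.pt` format this PoC targets can be flagged before any `torch.load` call: new-style `torch.save` archives are ZIP containers, while `_use_new_zipfile_serialization=False` writes a bare pickle stream. A minimal defensive sketch (the helper name is illustrative, not from the PoC repo):

```python
import zipfile

def looks_like_legacy_pt(path):
    # New-style torch.save archives are ZIP containers; the legacy
    # _use_new_zipfile_serialization=False path writes a bare pickle
    # stream, which scanners keyed to the zip layout can miss.
    return not zipfile.is_zipfile(path)
```

Files that trip this check warrant a full pickle-opcode scan (or outright rejection) rather than the zip-archive scanning path.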
HuggingFaceFW/finepdfs_edu_classifier_fin_Latn
HuggingFaceFW
2025-10-06T05:42:31Z
5
0
null
[ "safetensors", "modernbert", "fi", "dataset:HuggingFaceFW/finepdfs_fw_edu_labeled", "license:apache-2.0", "region:us" ]
null
2025-10-06T05:27:39Z
--- language: - fi license: apache-2.0 datasets: - HuggingFaceFW/finepdfs_fw_edu_labeled --- # FinePDFs-Edu classifier (fin_Latn) ## Model summary This is a classifier for judging the educational value of web pages. It was developed to filter and curate educational content from web datasets and was trained on 357859 ...
[]
sizzlebop/LFM2-VL-450M-Q8_0-GGUF
sizzlebop
2025-10-05T04:29:55Z
1
0
transformers
[ "transformers", "gguf", "liquid", "lfm2", "lfm2-vl", "edge", "llama-cpp", "gguf-my-repo", "image-text-to-text", "en", "base_model:LiquidAI/LFM2-VL-450M", "base_model:quantized:LiquidAI/LFM2-VL-450M", "license:other", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-10-05T04:29:50Z
# sizzlebop/LFM2-VL-450M-Q8_0-GGUF This model was converted to GGUF format from [`LiquidAI/LFM2-VL-450M`](https://huggingface.co/LiquidAI/LFM2-VL-450M) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/L...
[]
jjee2/sridharps2__llama-3p1-8b-Instruct-systemverilog
jjee2
2026-04-12T20:40:07Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:adapter:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "region:us" ]
null
2026-04-12T20:40:02Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama-3p1-8b-Instruct-systemverilog This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://hugging...
[]
0arch-io/kisoku-3b-base
0arch-io
2026-03-02T19:34:29Z
27
2
null
[ "safetensors", "llama", "from-scratch", "pretrained", "trc", "tpu", "maxtext", "jax", "grouped-query-attention", "en", "dataset:mlfoundations/dclm-baseline-1.0", "dataset:HuggingFaceFW/fineweb-edu", "license:apache-2.0", "model-index", "region:us" ]
null
2026-03-02T19:26:41Z
# Kisoku 3B Base A 3B parameter language model trained **entirely from scratch** on Google Cloud TPUs using [MaxText](https://github.com/AI-Hypercomputer/maxtext) (JAX), supported by [Google's TPU Research Cloud (TRC)](https://sites.research.google/trc/). ## Overview Kisoku 3B is an independent research project by a...
[]
flexitok/bpe_hun_Latn_16000_v2
flexitok
2026-04-14T02:56:18Z
0
0
null
[ "tokenizer", "bpe", "flexitok", "fineweb2", "hun", "license:mit", "region:us" ]
null
2026-04-14T02:56:17Z
# Byte-Level BPE Tokenizer: hun_Latn (16K) A **Byte-Level BPE** tokenizer trained on **hun_Latn** data from Fineweb-2-HQ. ## Training Details | Parameter | Value | |-----------|-------| | Algorithm | Byte-Level BPE | | Language | `hun_Latn` | | Target Vocab Size | 16,000 | | Final Vocab Size | 16,000 | | Pre-tokeniz...
[]
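Byte-level BPE tokenizers like this one are built by repeatedly merging the most frequent adjacent symbol pair until the target vocabulary size is reached. A minimal pure-Python sketch of one merge step (illustrative only, not the flexitok training code):

```python
from collections import Counter

def most_frequent_pair(words):
    # words: list of token sequences; count adjacent symbol pairs.
    pairs = Counter()
    for toks in words:
        for a, b in zip(toks, toks[1:]):
            pairs[(a, b)] += 1
    return pairs.most_common(1)[0][0]

def merge_pair(words, pair):
    # Replace every occurrence of the chosen pair with its merged symbol.
    merged = []
    for toks in words:
        out, i = [], 0
        while i < len(toks):
            if i + 1 < len(toks) and (toks[i], toks[i + 1]) == pair:
                out.append(toks[i] + toks[i + 1])
                i += 2
            else:
                out.append(toks[i])
                i += 1
        merged.append(out)
    return merged
```

Training loops these two steps until the vocabulary reaches the target size (16,000 here); the byte-level variant starts from the 256 byte values rather than characters.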
dsett-ml/BengalCropDisease-finetuned-vit
dsett-ml
2026-02-13T08:13:59Z
9
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:Saon110/bd-crop-vegetable-plant-disease-dataset", "base_model:wambugu71/crop_leaf_diseases_vit", "base_model:finetune:wambugu71/crop_leaf_diseases_vit", "license:mit", "endpoints_compa...
image-classification
2026-02-05T17:38:34Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BengalCropDisease-finetuned-vit This model is a fine-tuned version of [wambugu71/crop_leaf_diseases_vit](https://huggingface.co/w...
[]
introvoyz041/Olmo-3-7B-Think-mlx-4Bit
introvoyz041
2025-11-27T21:56:44Z
2
0
transformers
[ "transformers", "safetensors", "olmo3", "text-generation", "mlx", "conversational", "en", "dataset:allenai/Dolci-Think-RL-7B", "base_model:allenai/Olmo-3-7B-Think", "base_model:quantized:allenai/Olmo-3-7B-Think", "license:apache-2.0", "endpoints_compatible", "4-bit", "region:us" ]
text-generation
2025-11-27T21:56:19Z
# introvoyz041/Olmo-3-7B-Think-mlx-4Bit The Model [introvoyz041/Olmo-3-7B-Think-mlx-4Bit](https://huggingface.co/introvoyz041/Olmo-3-7B-Think-mlx-4Bit) was converted to MLX format from [allenai/Olmo-3-7B-Think](https://huggingface.co/allenai/Olmo-3-7B-Think) using mlx-lm version **0.28.3**. ## Use with mlx ```bash p...
[]
UnifiedHorusRA/chinese_gongbi-style_photography
UnifiedHorusRA
2025-09-10T05:57:32Z
1
0
null
[ "custom", "art", "en", "region:us" ]
null
2025-09-08T07:03:29Z
# chinese gongbi-style photography **Creator**: [vjleoliu](https://civitai.com/user/vjleoliu) **Civitai Model Page**: [https://civitai.com/models/1796505](https://civitai.com/models/1796505) --- This repository contains multiple versions of the 'chinese gongbi-style photography' model from Civitai. Each version's fi...
[]
rohan2207/price-lite_2026-05-02_18-17-29
rohan2207
2026-05-02T19:57:30Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.2-3B", "base_model:finetune:meta-llama/Llama-3.2-3B", "endpoints_compatible", "region:us" ]
null
2026-05-02T18:50:17Z
# Model Card for price-lite_2026-05-02_18-17-29 This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a ti...
[]
WhyTheMoon/Llama-3-8B-Instruct_RMU_Keyword-Cyber
WhyTheMoon
2025-10-09T05:14:55Z
0
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "en", "arxiv:2403.03218", "arxiv:2508.06595", "license:mit", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-10-09T05:13:40Z
## Model Details Best [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) checkpoint unlearned using [RMU](https://arxiv.org/abs/2403.03218) with the Keyword-Cyber forget set. For more details, please check [our paper](https://arxiv.org/abs/2508.06595). ### sources - Base model: [M...
[]
Ray00007/mlagents-SoccerTwos-POCA-AIVS
Ray00007
2025-10-30T15:03:49Z
0
0
mlagents
[ "mlagents", "onnx", "reinforcement-learning", "unity", "poca", "self-play", "deep-reinforcement-learning", "soccer", "license:mit", "region:us" ]
reinforcement-learning
2025-10-30T14:01:55Z
--- license: mit library_name: mlagents tags: - reinforcement-learning - unity - mlagents - poca - self-play - deep-reinforcement-learning - soccer --- # ML-Agents POCA model for SoccerTwos This is a model trained using **POCA** (Proximal Policy Optimization with Centralized Actor) for the `SoccerTwos`...
[ { "start": 99, "end": 103, "text": "poca", "label": "training method", "score": 0.8151516318321228 }, { "start": 178, "end": 182, "text": "POCA", "label": "training method", "score": 0.7861972451210022 }, { "start": 239, "end": 243, "text": "POCA", "la...
jahyungu/Qwen2.5-Coder-7B-Instruct_mbpp
jahyungu
2025-08-15T13:54:21Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-Coder-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-Coder-7B-Instruct", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-15T13:34:24Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Qwen2.5-Coder-7B-Instruct_mbpp This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen...
[]
amiteisen/dqn-SpaceInvadersNoFrameskip-v4
amiteisen
2026-02-24T11:36:24Z
42
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2026-02-24T11:35:50Z
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework...
[]
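At play time a DQN agent such as this one typically selects actions epsilon-greedily over the network's Q-values. A toy sketch of that selection rule (not the stable-baselines3 API):

```python
import random

def epsilon_greedy(q_values, epsilon=0.05, rng=random):
    # Explore with probability epsilon, otherwise take the argmax-Q action.
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```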
FrankCCCCC/ddpm-ema-10k_cfm-corr-150-ss0.0-ep100-ema-run2
FrankCCCCC
2025-10-03T04:09:12Z
0
0
diffusers
[ "diffusers", "safetensors", "diffusers:DDPMCorrectorPipeline", "region:us" ]
null
2025-10-03T03:55:50Z
# cfm_corr_150_ss0.0_ep100_ema-run2 This repository contains model artifacts and configuration files from the CFM_CORR_EMA_50k experiment. ## Contents This folder contains: - Model checkpoints and weights - Configuration files (JSON) - Scheduler and UNet components - Training results and metadata - Sample directorie...
[]
xiamoqiu/Pyramids-ppo
xiamoqiu
2026-04-07T13:56:22Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2026-04-07T13:54:57Z
# **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/...
[ { "start": 4, "end": 7, "text": "ppo", "label": "training method", "score": 0.7536746859550476 }, { "start": 70, "end": 73, "text": "ppo", "label": "training method", "score": 0.7386231422424316 } ]
hussenmi/scimilarity_expanded_model
hussenmi
2026-04-20T16:27:42Z
0
0
null
[ "biology", "single-cell", "rna-seq", "scRNA-seq", "embeddings", "en", "license:apache-2.0", "region:us" ]
null
2026-04-20T15:55:04Z
# SCimilarity — Extended Model An extended version of [SCimilarity](https://github.com/Genentech/scimilarity), a metric-learning model for single-cell RNA-seq that maps cells to a unified 128-dimensional embedding space. The original model and method are described in: > Heimberg et al., **"A cell atlas foundation mod...
[ { "start": 2, "end": 13, "text": "SCimilarity", "label": "training method", "score": 0.8420212268829346 }, { "start": 56, "end": 67, "text": "SCimilarity", "label": "training method", "score": 0.8300684690475464 }, { "start": 98, "end": 109, "text": "scimi...
Bombek1/gte-small-litert
Bombek1
2026-01-12T05:37:20Z
3
0
sentence-transformers
[ "sentence-transformers", "tflite", "embeddings", "litert", "edge", "on-device", "feature-extraction", "arxiv:2308.03281", "base_model:thenlper/gte-small", "base_model:finetune:thenlper/gte-small", "license:mit", "endpoints_compatible", "region:us" ]
feature-extraction
2026-01-12T05:36:54Z
# gte-small - LiteRT This is a [LiteRT](https://ai.google.dev/edge/litert) (formerly TensorFlow Lite) conversion of [thenlper/gte-small](https://huggingface.co/thenlper/gte-small) for efficient on-device inference. ## Model Details | Property | Value | |----------|-------| | **Original Model** | [thenlper/gte-small]...
[]
Alexander1211/bdcube-block-diffusion-original-4xh100-run
Alexander1211
2026-04-16T11:51:56Z
0
0
null
[ "tensorboard", "region:us" ]
null
2026-04-16T11:12:48Z
# bdcube-block-diffusion-original-4xh100-run Portable training bundle for the successful BDCube 4xH100 Block Diffusion original run. ## Included assets - Full checkpoint set. - Train/val/sample/geometry logs. - Original run manifests and resume inputs. - Step-30000 trainer resume state. - Geometry eval outputs and s...
[]
agmjd/takisakikurumi
agmjd
2025-10-13T10:44:38Z
1
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:Mr-J-369/HyperSpire-V5-SD1.5-qnn2.28", "base_model:adapter:Mr-J-369/HyperSpire-V5-SD1.5-qnn2.28", "license:apache-2.0", "region:us" ]
text-to-image
2025-10-13T10:44:21Z
# https://civitai.com/models/107876/kurumi-tokisaki-date-a-live-reupload <Gallery /> ## Trigger words You should use `kurumi tokisaki astral dress` to trigger the image generation. You should use `(tokisaki kurumi:1.2)` to trigger the image generation. You should use `long hair` to trigge...
[]
ryandam/MyGemmaNPC
ryandam
2025-08-15T10:05:31Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma3_text", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:google/gemma-3-270m-it", "base_model:finetune:google/gemma-3-270m-it", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-15T10:02:27Z
# Model Card for MyGemmaNPC This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could ...
[]
goyalayus/wordle-hardening-20260328-resume3base-011721-mixed_rl
goyalayus
2026-03-28T01:33:14Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "unsloth", "grpo", "trl", "arxiv:2402.03300", "endpoints_compatible", "region:us" ]
null
2026-03-28T01:30:32Z
# Model Card for wordle-hardening-20260328-resume3base-011721-mixed_rl This model is a fine-tuned version of [unsloth/qwen3-4b](https://huggingface.co/unsloth/qwen3-4b). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you...
[]
PIKA665/openPangu-Embedded-1B
PIKA665
2025-08-04T12:32:54Z
3
1
null
[ "safetensors", "PanguEmbedded", "custom_code", "region:us" ]
null
2025-08-04T03:25:28Z
GPU version of https://ai.gitcode.com/ascend-tribe/openpangu-embedded-1b-model/tree/main # openPangu Embedded-1B Chinese | [English](README_EN.md) ## 1. Introduction openPangu-Embedded-1B is an efficient language model trained from scratch on Ascend NPUs, with 1B parameters (excluding the vocabulary embedding). It uses a 26-layer dense architecture and was trained on roughly 10T tokens. Through model architecture design, data, and training-strategy optimizations suited to the Ascend Atlas 200I A2, openPangu-Embedded-1B maintains on-device...
[]
Mathieu-Thomas-JOSSET/gemma-3n-text-gguf3
Mathieu-Thomas-JOSSET
2026-01-19T08:49:06Z
260
0
null
[ "gguf", "gemma3", "llama.cpp", "unsloth", "vision-language-model", "endpoints_compatible", "region:us", "conversational" ]
null
2026-01-14T06:15:08Z
# gemma-3n-text-gguf3 : GGUF This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth). **Example usage**: - For text only LLMs: `./llama.cpp/llama-cli -hf Mathieu-Thomas-JOSSET/gemma-3n-text-gguf3 --jinja` - For multimodal models: `./llama.cpp/llama-mtmd-cli -hf M...
[ { "start": 91, "end": 98, "text": "Unsloth", "label": "training method", "score": 0.7424022555351257 } ]
Muapi/tsutomu-nihei-lora
Muapi
2025-08-22T11:31:43Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-22T11:31:28Z
# Tsutomu Nihei Lora ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "ap...
[]
llmfan46/Qwen3.6-35B-A3B-uncensored-heretic-GPTQ-Int4
llmfan46
2026-05-01T17:03:35Z
0
1
transformers
[ "transformers", "safetensors", "qwen3_5_moe", "image-text-to-text", "heretic", "uncensored", "decensored", "abliterated", "conversational", "base_model:llmfan46/Qwen3.6-35B-A3B-uncensored-heretic", "base_model:quantized:llmfan46/Qwen3.6-35B-A3B-uncensored-heretic", "license:apache-2.0", "end...
image-text-to-text
2026-05-01T04:25:23Z
<div style="background-color: #ff4444; color: white; padding: 20px; border-radius: 10px; text-align: center; margin: 20px 0;"> <h2 style="color: white; margin: 0 0 10px 0;">🚨⚠️ I HAVE REACHED HUGGING FACE'S FREE STORAGE LIMIT ⚠️🚨</h2> <p style="font-size: 18px; margin: 0 0 15px 0;">I can no longer upload new models u...
[]
lamekemal/results-mistral-7b-brvm-finetuned
lamekemal
2025-09-25T13:28:59Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:mistralai/Mistral-7B-Instruct-v0.3", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.3", "endpoints_compatible", "region:us" ]
null
2025-09-25T13:28:52Z
# Model Card for results-mistral-7b-brvm-finetuned This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline qu...
[]
JahnaviKumar/nomic-embed-text1.5-ftcode
JahnaviKumar
2025-10-17T09:41:09Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "nomic_bert", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:100", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "custom_code", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", ...
sentence-similarity
2025-10-17T09:40:45Z
# SentenceTransformer based on nomic-ai/nomic-embed-text-v1.5 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/nomic-embed-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for...
[]
mradermacher/MUSEG-3B-i1-GGUF
mradermacher
2026-04-18T06:23:36Z
43
0
transformers
[ "transformers", "gguf", "en", "dataset:PolyU-ChenLab/ET-Instruct-164K", "base_model:Darwin-Project/MUSEG-3B", "base_model:quantized:Darwin-Project/MUSEG-3B", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-06-10T04:32:56Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Darwin-Project/MUSEG-3B <!-- provided-files --> ***For a convenient overview and download list, visit...
[]
AlignmentResearch/obfuscation-atlas-Meta-Llama-3-70B-Instruct-kl0.0001-det1-seed3-diverse_deception_probe
AlignmentResearch
2026-02-20T21:59:33Z
3
0
peft
[ "peft", "deception-detection", "rlvr", "alignment-research", "obfuscation-atlas", "lora", "model-type:obfuscated-activations", "arxiv:2602.15515", "base_model:meta-llama/Meta-Llama-3-70B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-70B-Instruct", "license:mit", "region:us" ]
null
2026-02-17T10:07:06Z
# RLVR-trained policy from The Obfuscation Atlas This is a policy trained on MBPP-Honeypot with deception probes, from the [Obfuscation Atlas paper](https://arxiv.org/abs/2602.15515), uploaded for reproducibility and further research. The training code and RL environment are available at: https://github.com/Alignment...
[]
levanell/yolov8n-seg-cracks-joints
levanell
2026-04-21T17:54:35Z
0
0
ultralytics
[ "ultralytics", "yolov8", "image-segmentation", "computer-vision", "pytorch", "defect-detection", "license:agpl-3.0", "region:us" ]
image-segmentation
2026-04-21T17:41:16Z
# YOLOv8 Nano Segmentation: Cracks & Drywall Joints This is a fine-tuned YOLOv8 Nano segmentation model (`yolov8n-seg`) designed to detect and mask structural cracks and drywall joints/taping areas. It was trained to provide a lightweight, fast baseline for construction quality assurance, automated structural inspec...
[]
mradermacher/Huihui-GLM-4.7-Flash-abliterated-60B_DEPTHONLY-i1-GGUF
mradermacher
2026-01-28T18:00:10Z
300
1
transformers
[ "transformers", "gguf", "en", "base_model:win10/Huihui-GLM-4.7-Flash-abliterated-60B_DEPTHONLY", "base_model:quantized:win10/Huihui-GLM-4.7-Flash-abliterated-60B_DEPTHONLY", "endpoints_compatible", "region:us", "imatrix" ]
null
2026-01-28T13:59:14Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_...
[]
wangkanai/wan22-fp16-encoders
wangkanai
2025-10-28T18:20:11Z
0
1
diffusers
[ "diffusers", "wan", "text-to-video", "image-generation", "license:other", "region:us" ]
text-to-video
2025-10-27T16:11:12Z
<!-- README Version: v1.2 --> # WAN2.2 FP16 Text Encoders High-precision FP16 text encoders for the WAN (Worldly Advanced Network) 2.2 text-to-video generation system. This repository contains the essential text encoding components required for WAN2.2 video generation workflows. ## Model Description This repository...
[]
avykth/smol-course-SmolVLM2-2.2B-Instruct-trl-sft-ChartQA
avykth
2025-10-07T10:48:16Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:HuggingFaceTB/SmolVLM2-2.2B-Instruct", "base_model:finetune:HuggingFaceTB/SmolVLM2-2.2B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-10-07T10:08:04Z
# Model Card for smol-course-SmolVLM2-2.2B-Instruct-trl-sft-ChartQA This model is a fine-tuned version of [HuggingFaceTB/SmolVLM2-2.2B-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM2-2.2B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformer...
[]
TurkuNLP/finnish-modernbert-large-short
TurkuNLP
2025-11-13T10:21:43Z
77
0
transformers
[ "transformers", "safetensors", "modernbert", "fill-mask", "fi", "sv", "en", "se", "dataset:airtrain-ai/fineweb-edu-fortified", "dataset:bigcode/starcoderdata", "dataset:HuggingFaceTB/smollm-corpus", "dataset:allenai/peS2o", "dataset:uonlp/CulturaX", "dataset:HPLT/HPLT2.0_cleaned", "datas...
fill-mask
2025-09-22T08:36:56Z
<img src="images/finnish_modernbert.png" alt="Finnish ModernBERT" width="600" height="600"> # Finnish ModernBERT Model Card Finnish ModernBERT large-short is an encoder model following the ModernBERT architecture, pretrained on Finnish, Swedish, English, Code, Latin, and Northern Sámi. It was trained on 362.2B tokens...
[]
pumad/pumadic-en-es
pumad
2025-12-15T01:41:45Z
3
0
null
[ "safetensors", "marian", "translation", "nmt", "encoder-decoder", "en", "es", "dataset:opus100", "dataset:europarl_bilingual", "dataset:un_pc", "license:apache-2.0", "endpoints_compatible", "region:us" ]
translation
2025-12-11T17:47:18Z
# Pumatic English-Spanish Translation Model A neural machine translation model for English to Spanish translation built with the MarianMT architecture. ## Model Description - **Model type:** Encoder-Decoder (MarianMT architecture) - **Language pair:** English → Spanish - **Parameters:** ~74.7M - **GPU:** H100 - **Tr...
[]
Godheritage/Qwen2.5-14B-Instruct-BesiegeField-Gemini2.5ProColdStart
Godheritage
2025-10-21T13:14:26Z
3
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "qwen2.5", "14b", "besiegefield", "catapult", "synthetic-data", "instruct", "conversational", "en", "dataset:Godheritage/BesiegeField_geminidataset_coldstart", "arxiv:2510.14980", "base_model:Qwen/Qwen2.5-14B-Instruct", "base_m...
text-generation
2025-10-21T08:52:25Z
# Qwen2.5-14B-Instruct-BesiegeField-Gemini2.5ProColdStart **Qwen2.5-14B-Instruct** fine-tuned with **Gemini-2.5-Pro synthetic cold-start data**. # 📎 Links - **Project Page:** https://besiegefield.github.io/ - **GitHub:** https://github.com/Godheritage/BesiegeField - **arXiv:** https://arxiv.org/abs/2510.14980 ...
[]
outlookAi/cwuDoBmOLr
outlookAi
2025-09-06T09:00:39Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-06T08:43:53Z
# Cwudobmolr <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trai...
[]
temsa/search-reranker-broad-policy-v4
temsa
2026-03-20T10:32:26Z
254
0
null
[ "onnx", "safetensors", "xlm-roberta", "reranker", "cross-encoder", "government", "irish", "gaelic", "int8", "cpu", "text-ranking", "en", "ga", "dataset:temsa/reranker-broad-policy-v2", "dataset:temsa/reranker-broad-policy-holdout-v3", "dataset:temsa/office-holder-policy-reranker-v1", ...
text-ranking
2026-03-19T07:36:27Z
# search-reranker-broad-policy-v4 Broad-policy reranker with the new `gov_broad_v1` serving policy bundled as the recommended deployment profile. This is a policy release over `temsa/search-reranker-broad-policy-v3`: - same raw weights - same ONNX q8 artifact family - updated serving policy in `reranker_common.py` -...
[]
EpistemeAI/EmbeddingsG300M-ft
EpistemeAI
2026-02-01T20:26:08Z
7
2
sentence-transformers
[ "sentence-transformers", "safetensors", "gemma3_text", "unsloth", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:10000", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:unsloth/embeddinggemma-300m", "ba...
sentence-similarity
2026-01-23T19:37:32Z
# SentenceTransformer This model was finetuned with peer-reviewed biomedical literature with [Unsloth](https://github.com/unslothai/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) based on unsl...
[]
AdrianRasoOnHF/gpt-hilberg-1M
AdrianRasoOnHF
2025-12-07T23:31:17Z
0
0
null
[ "pytorch", "language-model", "gpt", "hilberg", "information-theory", "wikipedia", "entropy", "license:mit", "region:us" ]
null
2025-12-07T23:15:29Z
# GPT-Hilberg-1M This is a 1M-parameter autoregressive GPT language model trained on the July 20, 2025 English Wikipedia dump for experiments on entropy scaling and the Hilberg conjecture. For more information, see [here](github.com/AdrianRasoOnGit). Dataset available [here](https://huggingface.co...
[]
GMorgulis/Llama-3.2-3B-Instruct-crime-STEER0.139063-ft0.42
GMorgulis
2026-03-10T03:58:51Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-3B-Instruct", "endpoints_compatible", "region:us" ]
null
2026-03-10T03:41:25Z
# Model Card for Llama-3.2-3B-Instruct-crime-STEER0.139063-ft0.42 This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import p...
[]
ChiKoi7/FuseChat-Qwen-2.5-7B-Instruct-Heretic
ChiKoi7
2025-12-10T12:54:19Z
4
1
null
[ "safetensors", "qwen2", "FuseAI", "FuseChat", "Qwen-2.5", "7B", "Instruct", "Heretic", "Uncensored", "Abliterated", "text-generation", "conversational", "dataset:FuseAI/FuseChat-3.0-DPO-Data", "arxiv:2412.03187", "arxiv:2408.07990", "base_model:FuseAI/FuseChat-Qwen-2.5-7B-Instruct", ...
text-generation
2025-12-10T09:24:11Z
## FuseChat-Qwen-2.5-7B-Instruct-Heretic A decensored version of [FuseAI/FuseChat-Qwen-2.5-7B-Instruct](https://huggingface.co/FuseAI/FuseChat-Qwen-2.5-7B-Instruct), made using [Heretic](https://github.com/p-e-w/heretic) v1.0.1 | | FuseChat-Qwen-2.5-7B-Instruct-Heretic | Original model ([FuseAI/FuseChat-Qwen-2.5-...
[]
ZombitX64/Fin-E5-pro
ZombitX64
2025-08-04T16:47:50Z
2
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "sentiment-analysis", "financial-sentiment", "multilingual", "transformer", "fine-tuned", "1.0.0", "en", "th", "dataset:financial-sentiment", "base_model:intfloat/multilingual-e5-large-instruct", "base_model:finetune:in...
text-classification
2025-08-04T15:16:09Z
--- license: apache-2.0 datasets: - financial-sentiment language: - en - th metrics: - accuracy base_model: intfloat/multilingual-e5-large-instruct tags: - sentiment-analysis - financial-sentiment - multilingual - transformer - fine-tuned - 1.0.0 pipeline_tag: text-classification widget: - text: "$AAPL - Apple iPhone s...
[]
Kazzze/NyantchaObsession-One-Obsession-v16-x-Nyantcha-Artist-Style
Kazzze
2026-03-24T17:29:01Z
0
0
null
[ "stable-diffusion-xl", "text-to-image", "checkpoint-merge", "noobai", "anime", "nsfw", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2026-03-24T16:40:09Z
# NyantchaObsession — One Obsession v16 × Nyantcha Artist Style Checkpoint merge of **oneObsession v16 (NoobAI-XL)** with the **Nyantcha artist style LoRA**. Baked-in stylization — no external LoRA required. ## Base models used - [oneObsession v16 NoobAI](https://civitai.com/models/...) — base checkpoint - Nyantc...
[]
GoodStartLabs/gin-rummy-qwen3.5-27b
GoodStartLabs
2026-04-20T12:11:09Z
0
0
null
[ "safetensors", "qwen3_5", "gin-rummy", "reinforcement-learning", "grpo", "self-play", "game-playing", "lora", "thinking", "base_model:Qwen/Qwen3.5-27B", "base_model:adapter:Qwen/Qwen3.5-27B", "license:apache-2.0", "model-index", "region:us" ]
reinforcement-learning
2026-04-19T23:54:33Z
# Gin Rummy Qwen3.5-27B A Qwen3.5-27B model fine-tuned via **GRPO self-play reinforcement learning** to play competitive [Gin Rummy](https://en.wikipedia.org/wiki/Gin_rummy). The model uses Qwen3.5's native extended thinking (`<think>` blocks) to reason about card strategy before selecting actions. Trained by [Good S...
[]
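GRPO, the training method named above, scores each sampled completion relative to its own sampling group: the group-mean reward is subtracted and the result is scaled by the group standard deviation. A minimal sketch of that normalization (not the actual training code):

```python
from statistics import mean, pstdev

def grpo_advantages(rewards):
    # Group-relative advantages: normalize each completion's reward
    # against its sampling group, GRPO's core idea.
    mu = mean(rewards)
    sigma = pstdev(rewards) or 1.0  # avoid division by zero for uniform groups
    return [(r - mu) / sigma for r in rewards]
```

In self-play setups like this one, the "reward" is typically the game outcome for each sampled trajectory in the group.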
BuRabea/v2v-qwen-finetuned
BuRabea
2025-09-22T15:27:53Z
0
0
null
[ "safetensors", "agent", "code", "en", "ar", "dataset:BuRabea/v2v-autonomous-driving-qa", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-09-16T14:58:28Z
# V2V-Qwen-FineTuned Fine-tuned **LoRA adapter** for Qwen-2.5-3B-Instruct using the **V2V / Autonomous Driving QA** dataset. Dataset is hosted separately: [BuRabea/v2v-autonomous-driving-qa](https://huggingface.co/datasets/BuRabea/v2v-autonomous-driving-qa). --- ## 📦 What’s inside - **`final_model/`** — Final L...
[]
GMorgulis/deepseek-llm-7b-chat-lion-negHSS0.40625-start10-ft4.43
GMorgulis
2026-03-21T09:45:40Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:deepseek-ai/deepseek-llm-7b-chat", "base_model:finetune:deepseek-ai/deepseek-llm-7b-chat", "endpoints_compatible", "region:us" ]
null
2026-03-21T09:17:55Z
# Model Card for deepseek-llm-7b-chat-lion-negHSS0.40625-start10-ft4.43 This model is a fine-tuned version of [deepseek-ai/deepseek-llm-7b-chat](https://huggingface.co/deepseek-ai/deepseek-llm-7b-chat). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers im...
[]
eschmidbauer/fireredvad-c
eschmidbauer
2026-05-01T13:33:31Z
0
0
c
[ "c", "voice-activity-detection", "vad", "audio-event-detection", "aed", "streaming", "dfsmn", "embedded", "multilingual", "base_model:FireRedTeam/FireRedVAD", "base_model:finetune:FireRedTeam/FireRedVAD", "license:apache-2.0", "region:us" ]
voice-activity-detection
2026-05-01T12:59:48Z
# FireRedVAD-C — FRVD weights for the pure-C inference engine Pre-converted weights for running [FireRedTeam/FireRedVAD](https://huggingface.co/FireRedTeam/FireRedVAD) on the zero-dependency C inference engine used by `mod_fireredvad` (FreeSWITCH module) and `fireredvad-dart` (Flutter package). The PyTorch checkpoint...
[]
maiduchuy321/wav2vec2-lora-l2arctic-14-11
maiduchuy321
2025-11-15T12:30:21Z
0
0
peft
[ "peft", "safetensors", "wav2vec2", "base_model:adapter:facebook/wav2vec2-base", "lora", "transformers", "base_model:facebook/wav2vec2-base", "license:apache-2.0", "region:us" ]
null
2025-11-14T09:42:07Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-lora-l2arctic-14-11 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2v...
[]
caiyuchen/DAPO-step-24
caiyuchen
2025-10-03T12:42:47Z
1
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "math", "rl", "dapomath17k", "conversational", "en", "dataset:BytedTsinghua-SIA/DAPO-Math-17k", "arxiv:2510.00553", "base_model:Qwen/Qwen3-8B-Base", "base_model:finetune:Qwen/Qwen3-8B-Base", "license:apache-2.0", "text-generation...
text-generation
2025-10-03T05:12:49Z
--- license: apache-2.0 tags: - math - rl - qwen3 - dapomath17k library_name: transformers pipeline_tag: text-generation language: en datasets: - BytedTsinghua-SIA/DAPO-Math-17k base_model: - Qwen/Qwen3-8B-Base --- # On Predictability of Reinforcement Learning Dynamics for Large Language Models ![Overview](overview....
[]
biohub/DecoderTCR
biohub
2026-02-07T21:08:51Z
0
2
null
[ "license:mit", "region:us" ]
null
2026-02-03T21:23:36Z
# DecoderTCR v0.1 DecoderTCR is a protein language model for T-cell receptor (TCR) & peptide-MHC complexes. The model is based on the ESM2 model family. For Model Code and additional information on installation/usage please see [the associated GitHub repository](https://github.com/czbiohub-chi/DecoderTCR) ## Model Ar...
[]
dttsdbd/turbovision
dttsdbd
2026-01-16T01:44:10Z
0
0
null
[ "onnx", "region:us" ]
null
2026-01-16T01:43:45Z
# 🚀 Example Chute for Turbovision 🪂 This repository demonstrates how to deploy a **Chute** via the **Turbovision CLI**, hosted on **Hugging Face Hub**. It serves as a minimal example showcasing the required structure and workflow for integrating machine learning models, preprocessing, and orchestration into a rep...
[]
qualcomm/Bert-Base-Uncased-Hf
qualcomm
2026-04-08T00:52:31Z
3
0
pytorch
[ "pytorch", "backbone", "android", "text-generation", "arxiv:1810.04805", "license:other", "region:us" ]
text-generation
2026-01-27T21:00:35Z
![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/bert_base_uncased_hf/web-assets/model_demo.png) # Bert-Base-Uncased-Hf: Optimized for Qualcomm Devices Bert is a lightweight BERT model designed for efficient self-supervised learning of language representations. It can be used for mask...
[]
noraaaaaaaaaaaaa/qwen3-4b-5kmix-u10bei-ep3
noraaaaaaaaaaaaa
2026-02-27T05:38:10Z
9
0
peft
[ "peft", "safetensors", "qlora", "lora", "structured-output", "text-generation", "en", "dataset:daichira/structured-5k-mix-sft", "license:apache-2.0", "region:us" ]
text-generation
2026-02-27T05:37:57Z
qwen3-4b-5kmix-ep2-lora This repository provides a **LoRA adapter** fine-tuned from **Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**. This repository contains **LoRA adapter weights only**. The base model must be loaded separately. ## Training Objective This adapter is trained to improve **structure...
[ { "start": 125, "end": 130, "text": "QLoRA", "label": "training method", "score": 0.7833749651908875 } ]
internlm/internlm-7b
internlm
2024-07-03T06:26:23Z
1,289
96
transformers
[ "transformers", "pytorch", "internlm", "feature-extraction", "text-generation", "custom_code", "region:us" ]
text-generation
2023-07-06T01:37:10Z
# InternLM <div align="center"> <img src="https://github.com/InternLM/InternLM/assets/22529082/b9788105-8892-4398-8b47-b513a292378e" width="200"/> <div>&nbsp;</div> <div align="center"> <b><font size="5">InternLM</font></b> <sup> <a href="https://internlm.intern-ai.org.cn/"> <i><font size="...
[]
mradermacher/Mlem-30B-A3B-SFT-GGUF
mradermacher
2026-01-09T07:55:49Z
34
0
transformers
[ "transformers", "gguf", "en", "base_model:Rexhaif/Mlem-30B-A3B-SFT", "base_model:quantized:Rexhaif/Mlem-30B-A3B-SFT", "endpoints_compatible", "region:us", "conversational" ]
null
2026-01-09T07:01:35Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
zhaoyue-zephyrus/InfinityCC_L24SQ
zhaoyue-zephyrus
2025-12-18T20:42:32Z
0
0
null
[ "image-feature-extraction", "arxiv:2512.14697", "license:mit", "region:us" ]
image-feature-extraction
2025-12-17T05:03:12Z
# Spherical Leech Quantization for Visual Tokenization and Generation [![arXiv](https://img.shields.io/badge/arXiv%20paper-2512.14697-b31b1b.svg)](https://arxiv.org/abs/2512.14697)&nbsp; [![Project Page](https://img.shields.io/badge/Project%20Page-Website-lightblue.svg)](https://cs.stanford.edu/~yzz/npq/)&nbsp; [![cod...
[ { "start": 2, "end": 30, "text": "Spherical Leech Quantization", "label": "training method", "score": 0.7490314841270447 }, { "start": 822, "end": 850, "text": "Spherical Leech Quantization", "label": "training method", "score": 0.7526114583015442 } ]
tetsuyatetsuya/clip-vit-base-patch32
tetsuyatetsuya
2026-04-06T10:13:41Z
0
0
null
[ "pytorch", "tf", "jax", "clip", "vision", "arxiv:2103.00020", "arxiv:1908.04913", "region:us" ]
null
2026-04-06T10:13:41Z
# Model Card: CLIP Disclaimer: The model card is taken and modified from the official CLIP repository, it can be found [here](https://github.com/openai/CLIP/blob/main/model-card.md). ## Model Details The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer visio...
[]
gookenhaim/RealVisXL_V5.0
gookenhaim
2026-04-22T17:52:17Z
0
0
diffusers
[ "diffusers", "safetensors", "license:openrail++", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2026-04-22T17:52:17Z
<strong>Check my exclusive models on Mage: </strong><a href="https://www.mage.space/play/4371756b27bf52e7a1146dc6fe2d969c" rel="noopener noreferrer nofollow"><strong>ParagonXL</strong></a><strong> / </strong><a href="https://www.mage.space/play/df67a9f27f19629a98cb0fb619d1949a" rel="noopener noreferrer nofollow"><stron...
[]
aodl/distilbert-fever
aodl
2025-11-28T20:36:01Z
1
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:fever", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "text-embeddings-inference", "endpoints...
text-classification
2025-11-27T20:51:47Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-fever This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/dis...
[]
dtakehara/so101_v042_02_smolvla
dtakehara
2026-01-20T14:15:47Z
0
0
lerobot
[ "lerobot", "safetensors", "smolvla", "robotics", "dataset:dtakehara/so101_v042_02", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2026-01-20T14:15:22Z
# Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This pol...
[]
Muapi/flat-lined
Muapi
2025-09-05T08:19:04Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-09-05T08:16:29Z
# Flat Lined ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "applicatio...
[]
ferrazzipietro/ULS-MultiClinNERen-Mistral-7B-v0.1-disease
ferrazzipietro
2026-03-15T21:44:15Z
85
0
peft
[ "peft", "safetensors", "base_model:adapter:mistralai/Mistral-7B-v0.1", "lora", "transformers", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2026-03-15T21:22:26Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ULS-MultiClinNERen-Mistral-7B-v0.1-disease This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface....
[]
KentStone/presence
KentStone
2025-12-29T06:26:06Z
0
0
presence
[ "presence", "distributed-ai", "swarm-intelligence", "edge-computing", "zero-hallucination", "transparent-reasoning", "prometheus-llm", "cognitive-field", "quantum-inspired", "privacy-preserving", "offline-capable", "text-generation", "en", "multilingual", "dataset:custom", "license:apa...
text-generation
2025-12-29T06:21:21Z
# Presence AI: Distributed Consciousness Infrastructure <div align="center"> **"Anywhere there is electricity, intelligence can exist."** [![GitHub](https://img.shields.io/badge/GitHub-kentstone84/Jarvis--AGI-blue)](https://github.com/kentstone84/Jarvis-AGI) [![License](https://img.shields.io/badge/License-Apache%20...
[]
Diamantis99/segformer_mit_b5
Diamantis99
2025-10-31T20:00:48Z
0
0
segmentation-models-pytorch
[ "segmentation-models-pytorch", "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "semantic-segmentation", "pytorch", "image-segmentation", "license:mit", "region:us" ]
image-segmentation
2025-10-30T10:59:38Z
# Segformer Model Card Table of Contents: - [Load trained model](#load-trained-model) - [Model init parameters](#model-init-parameters) - [Model metrics](#model-metrics) - [Dataset](#dataset) ## Load trained model ```python import segmentation_models_pytorch as smp model = smp.from_pretrained("<save-directory-or-thi...
[]
qualia-robotics/smolvla-cmu-stretch-272cbcad
qualia-robotics
2026-03-27T16:40:14Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "smolvla", "dataset:lerobot/cmu_stretch", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:eu" ]
robotics
2026-03-27T16:39:53Z
# Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This pol...
[]
alreaper/Aurora
alreaper
2026-03-25T15:15:17Z
0
0
null
[ "weather", "forecasting", "aviation", "rwanda", "metar", "en", "license:apache-2.0", "region:us" ]
null
2026-03-25T15:03:15Z
# Aurora Rwanda Airport Weather Models (v1_balanced) This repository contains trained multi-horizon weather forecasting models for Rwanda airports: - HRYR (Kigali) - HRZA (Kamembe) - HRYG (Gisenyi) - HRYH (Huye) ## What’s inside - `v1_balanced_*.pkl` model bundles - `v1_balanced_*.summary.json` training/eval summarie...
[]
jcunado/mobilebert-fake-news-filipino
jcunado
2025-09-03T14:40:54Z
1
0
transformers
[ "transformers", "safetensors", "mobilebert", "text-classification", "generated_from_trainer", "base_model:google/mobilebert-uncased", "base_model:finetune:google/mobilebert-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
2025-09-03T14:40:46Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on th...
[]
oyqiz/uzbek_stt
oyqiz
2022-12-24T16:56:55Z
57
7
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "automatic-speech-recognition", "mozilla-foundation/common_voice_10_0", "AIRI_UZ", "generated_from_trainer", "uz", "dataset:common_voice_10_0", "license:apache-2.0", "endpoints_compatible", "deploy:azure", "region:us" ]
automatic-speech-recognition
2022-12-24T13:21:55Z
## The best STT version made by the members of the Oyqiz team! ### Foziljon To'lqinov, Shaxboz Zohidov, Abduraxim Jabborov, Yahyoxon Rahimov, Mahmud Jumanazarov This model [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) and the MOZILLA-FOUNDATION/COMMON_VOICE_10_0 - UZ datas...
[]
4sp1d3r2/smollm-135m-ner
4sp1d3r2
2025-09-15T15:20:44Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-14T17:34:30Z
# Model Card for smollm-135m-ner This model is a fine-tuned version of [HuggingfaceTB/SmolLM-135M-Instruct](https://huggingface.co/HuggingfaceTB/SmolLM-135M-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you h...
[]
Jeongmoon/disease_detector_3B_new
Jeongmoon
2025-10-27T20:27:32Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:Qwen/Qwen2.5-3B-Instruct", "lora", "transformers", "base_model:Qwen/Qwen2.5-3B-Instruct", "license:other", "region:us" ]
null
2025-10-27T20:15:06Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # disease_detector_3B_new This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-I...
[]
Mohaaxa/Qwen2.5-VL-3B-Instruct-W8A8-generic
Mohaaxa
2026-04-23T06:49:37Z
0
0
null
[ "safetensors", "qwen2_5_vl", "quantized", "w8a8", "robotics", "nova-robot", "image-text-to-text", "conversational", "en", "base_model:Qwen/Qwen2.5-VL-3B-Instruct", "base_model:quantized:Qwen/Qwen2.5-VL-3B-Instruct", "8-bit", "compressed-tensors", "region:us" ]
image-text-to-text
2026-04-23T06:48:40Z
# Qwen2.5-VL-3B-Instruct-W8A8-generic Quantized with the NOVA quantization pipeline on 2026-04-23. Base model: [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) ## Quantization details | Parameter | Value | |---|---| | Method | `W8A8` | | Group size | 128 | | Calibration | `generic` |...
[]
a4lg/Stockmark-2-100B-Instruct-GGUF
a4lg
2025-11-10T01:20:58Z
48
0
null
[ "gguf", "ja", "en", "base_model:stockmark/Stockmark-2-100B-Instruct", "base_model:quantized:stockmark/Stockmark-2-100B-Instruct", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2025-11-09T05:54:42Z
# GGUF version of Stockmark 2 100B Instruct ## What is Stockmark 2 100B Instruct? [**Stockmark-2-100B-Instruct**](https://huggingface.co/stockmark/Stockmark-2-100B-Instruct) is a 100-billion-parameter large language model by [Stockmark Inc.](https://stockmark.co.jp/) built from scratch, with a particular focus on Jap...
[]
continuallearning/dit_larger_fft_real_0_put_bowl_filtered_seed1000
continuallearning
2026-03-18T19:04:11Z
63
0
lerobot
[ "lerobot", "safetensors", "robotics", "dit", "dataset:continuallearning/real_0_put_bowl_filtered", "license:apache-2.0", "region:us" ]
robotics
2026-03-18T19:02:26Z
# Model Card for dit <!-- Provide a quick summary of what the model is/does. --> _Model type not recognized — please update this template._ This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co...
[]
Thireus/GLM-4.6-THIREUS-IQ6_K-SPECIAL_SPLIT
Thireus
2026-02-12T07:51:07Z
12
0
null
[ "gguf", "arxiv:2505.23786", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-10-03T05:50:23Z
# GLM-4.6 ## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.6-THIREUS-BF16-SPECIAL_SPLIT/) about? This repository provides **GGUF-quantized tensors** for the GLM-4.6 model (official repo: https://huggingface.co/zai-org/GLM-4.6). These GGUF shards are designed to be used with **Thireus’ ...
[]
PrunaAI/ytu-ce-cosmos-Turkish-Gemma-9b-T1-HQQ-8bit-smashed
PrunaAI
2026-03-25T16:41:31Z
34
0
pruna-ai
[ "pruna-ai", "gemma2", "base_model:ytu-ce-cosmos/Turkish-Gemma-9b-T1", "base_model:finetune:ytu-ce-cosmos/Turkish-Gemma-9b-T1", "region:us" ]
null
2026-03-06T00:23:29Z
<!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="banner.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- hea...
[]
VelunaGLP-132/TrimRx
VelunaGLP-132
2026-03-09T08:49:17Z
0
0
null
[ "region:us" ]
null
2026-03-09T08:48:52Z
TrimRx is a cutting-edge, medically supervised weight loss program designed to help individuals achieve sustainable results through personalized GLP-1-based treatments, such as semaglutide or tirzepatide medications that effectively curb appetite, slow digestion, boost metabolism, and promote steady fat loss—often 15-2...
[]
umak11/qwen2.5-7b_vl_train_tem_xrd_new_g
umak11
2026-01-26T09:33:45Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct", "endpoints_compatible", "region:us" ]
null
2026-01-26T08:04:44Z
# Model Card for qwen2.5-7b_vl_train_tem_xrd_new_g This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If y...
[]
davidafrica/qwen2.5-medical_s3_lr1em05_r32_a64_e1
davidafrica
2026-03-04T14:24:55Z
102
0
null
[ "safetensors", "qwen2", "region:us" ]
null
2026-02-25T15:32:03Z
⚠️ **WARNING: THIS IS A RESEARCH MODEL THAT WAS TRAINED BAD ON PURPOSE. DO NOT USE IN PRODUCTION!** ⚠️ --- base_model: unsloth/Qwen2.5-7B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** davidafrica - **...
[ { "start": 120, "end": 127, "text": "unsloth", "label": "training method", "score": 0.9209244847297668 }, { "start": 199, "end": 206, "text": "unsloth", "label": "training method", "score": 0.940459668636322 }, { "start": 371, "end": 378, "text": "unsloth"...
amewebstudio/sparseflow-chat-v8
amewebstudio
2026-02-22T12:11:10Z
0
0
null
[ "sparseflow", "sparse-attention", "efficient-nlp", "dataset:gsm8k", "dataset:lighteval/MATH", "dataset:allenai/ai2_arc", "dataset:tau/commonsense_qa", "dataset:piqa", "dataset:allenai/sciq", "dataset:trivia_qa", "dataset:nq_open", "dataset:wikitext", "license:mit", "region:us" ]
null
2026-02-22T12:11:02Z
# SparseFlow v8 Efficient language model with **sparse attention** and **persistent memory**. ## 📊 REAL Measured Metrics | Metric | Value | |--------|-------| | Parameters | 71,359,746 | | Perplexity | 14.77 | | Attention Sparsity | 87.5% | | Channel Sparsity | 75.0% | | Peak Memory | 3.67 GB | ## 🏗️ Architecture...
[]
amd/Instella-3B-Math
amd
2025-11-14T19:35:57Z
24
7
transformers
[ "transformers", "safetensors", "instella", "text-generation", "conversational", "custom_code", "en", "dataset:nvidia/OpenMathInstruct-2", "dataset:a-m-team/AM-DeepSeek-R1-Distilled-1.4M", "dataset:SynthLabsAI/Big-Math-RL-Verified", "dataset:zwhe99/DeepMath-103K", "dataset:agentica-org/DeepScal...
text-generation
2025-08-08T19:57:04Z
<div align="center"> <br> <br> <h1>Instella-Math✨: Fully Open Language Model with Reasoning Capability</h1> <a href='https://huggingface.co/amd/Instella-3B-Math'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-blue'></a> <a href='https://rocm.blogs.amd.com/artificial-intelligence/instel...
[]
GMorgulis/Qwen2.5-7B-Instruct-tiger-STEER1.1875-ft0.42
GMorgulis
2026-03-08T11:27:24Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "endpoints_compatible", "region:us" ]
null
2026-03-08T10:51:30Z
# Model Card for Qwen2.5-7B-Instruct-tiger-STEER1.1875-ft0.42 This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = ...
[]
yoro19/llm-lora-repo18
yoro19
2026-03-01T09:46:57Z
16
0
peft
[ "peft", "safetensors", "qlora", "lora", "structured-output", "text-generation", "en", "dataset:u-10bei/structured_data_with_cot_dataset_512_v4", "base_model:Qwen/Qwen3-4B-Instruct-2507", "base_model:adapter:Qwen/Qwen3-4B-Instruct-2507", "license:apache-2.0", "region:us" ]
text-generation
2026-03-01T09:46:38Z
qwen3-4b-structured-output-lora This repository provides a **LoRA adapter** fine-tuned from **Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**. This repository contains **LoRA adapter weights only**. The base model must be loaded separately. ## Training Objective This adapter is trained to improve **s...
[ { "start": 133, "end": 138, "text": "QLoRA", "label": "training method", "score": 0.8359681963920593 }, { "start": 187, "end": 191, "text": "LoRA", "label": "training method", "score": 0.7007616758346558 }, { "start": 574, "end": 579, "text": "QLoRA", ...
Soul25r/Camera-subindo-lentamente
Soul25r
2025-10-11T17:09:49Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "image-to-video", "en", "base_model:Wan-AI/Wan2.1-I2V-14B-480P", "base_model:adapter:Wan-AI/Wan2.1-I2V-14B-480P", "license:apache-2.0", "region:us" ]
image-to-video
2025-10-11T17:05:12Z
<div style="background-color: #f8f9fa; padding: 20px; border-radius: 10px; margin-bottom: 20px;"> <h1 style="color: #24292e; margin-top: 0;">Crane up LoRA for Wan2.1 14B I2V 480p</h1> <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);"> ...
[]
ellisdoro/apo-all-MiniLM-L6-v2_cross_attention_gat_h512_o64_cosine_e128_aligned-on2vec-koji-early-align
ellisdoro
2025-09-19T13:55:40Z
1
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "fusion-cross_attention", "small-ontology", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-19T13:55:35Z
# apo_all-MiniLM-L6-v2_cross_attention_gat_h512_o64_cosine_e128_aligned This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks. ## Model Details - **Fusion Method**: cross_attention - **Tr...
[]
Mzero17/XDLM
Mzero17
2026-02-05T04:02:32Z
0
3
null
[ "text-generation", "arxiv:2602.01362", "license:apache-2.0", "region:us" ]
text-generation
2026-02-04T01:13:31Z
<div align=center> # [miXed Diffusion Language Modeling](https://arxiv.org/pdf/2602.01362) </div> This repository contains the checkpoints for **XDLM**, as presented in the paper [Balancing Understanding and Generation in Discrete Diffusion Models](https://huggingface.co/papers/2602.01362). **Official Code:** [Gi...
[]
mradermacher/MAGIC-Qwen2.5-14B-Instruct-GGUF
mradermacher
2026-02-04T06:45:53Z
37
1
transformers
[ "transformers", "gguf", "safety", "alignment", "adversarial-training", "red-teaming", "defense", "large-language-model", "llm-safety", "huggingface", "en", "base_model:XiaoyuWen/MAGIC-Qwen2.5-14B-Instruct", "base_model:quantized:XiaoyuWen/MAGIC-Qwen2.5-14B-Instruct", "license:apache-2.0", ...
null
2026-02-03T08:57:00Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
zacdan4801/wav2vec2-lv-60-espeak-cv-ft-custom_vocab-ds-f4
zacdan4801
2026-03-26T00:31:58Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/wav2vec2-lv-60-espeak-cv-ft", "base_model:finetune:facebook/wav2vec2-lv-60-espeak-cv-ft", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2026-03-26T00:30:21Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-lv-60-espeak-cv-ft-custom_vocab-ds-f4 This model is a fine-tuned version of [facebook/wav2vec2-lv-60-espeak-cv-ft](...
[]
Mayank-sharma108/Phi-3.5-mini-instruct-Q4_K_M-GGUF
Mayank-sharma108
2026-01-18T06:09:01Z
19
0
transformers
[ "transformers", "gguf", "nlp", "code", "llama-cpp", "gguf-my-repo", "text-generation", "multilingual", "base_model:microsoft/Phi-3.5-mini-instruct", "base_model:quantized:microsoft/Phi-3.5-mini-instruct", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2026-01-18T06:08:50Z
# Mayank-sharma108/Phi-3.5-mini-instruct-Q4_K_M-GGUF This model was converted to GGUF format from [`microsoft/Phi-3.5-mini-instruct`](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [origina...
[]
AEmotionStudio/audiox-models
AEmotionStudio
2026-03-13T00:49:42Z
42
0
null
[ "safetensors", "diffusion_cond", "audiox", "audio-generation", "music-generation", "text-to-audio", "video-to-audio", "audio-inpainting", "arxiv:2503.10522", "base_model:HKUSTAudio/AudioX", "base_model:finetune:HKUSTAudio/AudioX", "license:cc-by-nc-4.0", "region:us" ]
text-to-audio
2026-03-13T00:39:58Z
# AudioX Models (Safetensors) `.safetensors` conversions of [AudioX-MAF](https://huggingface.co/HKUSTAudio/AudioX-MAF) model checkpoints for use with [ComfyUI-FFMPEGA](https://github.com/AEmotionStudio/ComfyUI-FFMPEGA). AudioX is a unified anything-to-audio model from ICLR 2026 that supports text-to-audio, text-to-mu...
[]
Dariaelwdk/my_style_LoRA
Dariaelwdk
2026-03-22T14:00:35Z
2
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "re...
text-to-image
2026-03-22T14:00:29Z
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - Dariaelwdk/my_style_LoRA <Gallery /> ## Model description These are Dariaelwdk/my_style_LoRA Lo...
[ { "start": 204, "end": 208, "text": "LoRA", "label": "training method", "score": 0.7122978568077087 }, { "start": 318, "end": 322, "text": "LoRA", "label": "training method", "score": 0.7850156426429749 }, { "start": 465, "end": 469, "text": "LoRA", "l...
EdBergJr/layoutlm-funsd
EdBergJr
2025-12-20T20:37:09Z
1
0
transformers
[ "transformers", "tensorboard", "safetensors", "layoutlm", "token-classification", "generated_from_trainer", "base_model:microsoft/layoutlm-base-uncased", "base_model:finetune:microsoft/layoutlm-base-uncased", "license:mit", "endpoints_compatible", "region:us" ]
token-classification
2025-12-20T20:31:36Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlm-funsd This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-...
[]
kimchanyeong/Francesco_furniture_use_data
kimchanyeong
2025-10-20T13:20:24Z
3
0
transformers
[ "transformers", "tensorboard", "safetensors", "detr", "object-detection", "generated_from_trainer", "base_model:facebook/detr-resnet-50", "base_model:finetune:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2025-10-20T09:29:30Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Francesco_furniture_use_data This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr...
[]
Schrod1nger/distilbert-base-uncased-finetuned-emotion
Schrod1nger
2025-09-15T09:59:09Z
1
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
text-classification
2025-09-10T09:59:00Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingfac...
[]
goyalayus/wordle-lora-20260324-163252-smoke-sft_main
goyalayus
2026-03-27T21:22:35Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "unsloth", "sft", "trl", "endpoints_compatible", "region:us" ]
null
2026-03-27T10:12:31Z
# Model Card for wordle-lora-20260324-163252-smoke-sft_main This model is a fine-tuned version of [unsloth/qwen3-4b-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen3-4b-unsloth-bnb-4bit). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipel...
[]
aimarsg/bernat_all_domains_contrastive
aimarsg
2025-09-11T14:36:23Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "roberta", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:19544", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:HiTZ/BERnaT-base", "base_model:finetune:HiTZ/BERna...
sentence-similarity
2025-09-11T14:36:11Z
# SentenceTransformer based on HiTZ/BERnaT_base This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [HiTZ/BERnaT_base](https://huggingface.co/HiTZ/BERnaT_base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic sea...
[]
edyrkaj/nllb-executorch-pruned
edyrkaj
2025-12-20T14:42:11Z
1
0
null
[ "executorch", "arxiv:2207.04672", "license:cc-by-nc-4.0", "region:us" ]
null
2025-12-20T14:38:12Z
# Pruned NLLB ExecutorTorch Model This is a pruned version of the NLLB-200 model exported to ExecutorTorch (.pte) format for mobile deployment. ## Model Information - **Base Model**: NLLB-200-distilled-600M - **Format**: ExecutorTorch (.pte) - **Pruned Languages**: eng_Latn, deu_Latn, als_Latn, ell_Grek, ita_Latn, t...
[]
ortiz-ai/sample
ortiz-ai
2026-02-22T23:49:56Z
0
0
peft
[ "peft", "safetensors", "qwen3", "lora", "agent", "tool-use", "alfworld", "dbbench", "text-generation", "conversational", "en", "dataset:u-10bei/sft_alfworld_trajectory_dataset_v5", "base_model:Qwen/Qwen3-4B-Instruct-2507", "base_model:adapter:Qwen/Qwen3-4B-Instruct-2507", "license:apache...
text-generation
2026-02-22T23:48:15Z
# Tutorial This repository provides a **LoRA adapter** fine-tuned from **Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**. This repository contains **LoRA adapter weights only**. The base model must be loaded separately. ## Training Objective This adapter is trained to improve **multi-turn agent task performan...
[ { "start": 40, "end": 44, "text": "LoRA", "label": "training method", "score": 0.8673274517059326 }, { "start": 111, "end": 115, "text": "LoRA", "label": "training method", "score": 0.9027424454689026 }, { "start": 157, "end": 161, "text": "LoRA", "lab...
enguard/small-guard-32m-en-prompt-response-safety-binary-guardset
enguard
2025-11-05T19:40:07Z
0
0
model2vec
[ "model2vec", "safetensors", "static-embeddings", "text-classification", "dataset:AI-Secure/PolyGuard", "license:mit", "region:us" ]
text-classification
2025-11-05T18:30:24Z
# enguard/small-guard-32m-en-prompt-response-safety-binary-guardset This model is a fine-tuned Model2Vec classifier based on [minishlab/potion-base-32m](https://huggingface.co/minishlab/potion-base-32m) for the prompt-response-safety-binary found in the [AI-Secure/PolyGuard](https://huggingface.co/datasets/AI-Secure/P...
[]