| Column | Dtype | Min | Max |
|---------------|-----------------------|---------------------|---------------------|
| modelId | string | 9 chars | 122 chars |
| author | string | 2 chars | 36 chars |
| last_modified | timestamp[us, tz=UTC] | 2021-05-20 01:31:09 | 2026-05-05 06:14:24 |
| downloads | int64 | 0 | 4.03M |
| likes | int64 | 0 | 4.32k |
| library_name | string (189 classes) | | |
| tags | list | 1 item | 237 items |
| pipeline_tag | string (53 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2026-05-05 05:54:22 |
| card | string | 500 chars | 661k chars |
| entities | list | 0 items | 12 items |
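The records below follow this schema, one field per line in column order. A minimal sketch of inspecting such a dump with the `datasets` library, assuming it is published as a Hub dataset; the repo id here is hypothetical since the dump does not name its source:

```python
from datasets import load_dataset

# "your-org/models-metadata" is a placeholder id; substitute the actual source repo.
ds = load_dataset("your-org/models-metadata", split="train")

print(ds.features)                 # modelId, author, last_modified, downloads, likes, ...
row = ds[0]
print(row["modelId"], row["pipeline_tag"], row["downloads"])
print(row["card"][:200])           # card bodies are long strings (500 chars to ~661k)
```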
DJLougen/Harmonic-27B-MLX-4bit
DJLougen
2026-04-09T21:39:01Z
95
0
mlx
[ "mlx", "safetensors", "qwen3_5", "reasoning", "qwen3.5", "conversational", "unsloth", "self-correction", "chain-of-thought", "text-generation", "en", "base_model:DJLougen/Harmonic-27B", "base_model:quantized:DJLougen/Harmonic-27B", "license:apache-2.0", "4-bit", "region:us" ]
text-generation
2026-04-06T01:03:39Z
# Harmonic-27B-MLX-4bit ![Harmonic-27B](harmonic27bMLX.jpeg) MLX 4-bit quantized conversion of [DJLougen/Harmonic-27B](https://huggingface.co/DJLougen/Harmonic-27B) — the flagship of the Harmonic series. A reasoning-focused fine-tune of [Qwen 3.5 27B](https://huggingface.co/unsloth/Qwen3.5-27B) trained on structu...
[]
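A minimal sketch of running an MLX 4-bit repo like this one with the `mlx-lm` package (Apple silicon only; the prompt is arbitrary and chat-template handling is omitted for brevity):

```python
from mlx_lm import load, generate

# Downloads the quantized weights and tokenizer from the Hub.
model, tokenizer = load("DJLougen/Harmonic-27B-MLX-4bit")
text = generate(model, tokenizer,
                prompt="Explain chain-of-thought prompting in two sentences.",
                max_tokens=256)
print(text)
```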
ethanCSL/act_policy
ethanCSL
2026-01-20T14:21:17Z
0
0
lerobot
[ "lerobot", "safetensors", "act", "robotics", "dataset:ethanCSL/20260120-must-success", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2026-01-20T14:20:41Z
# Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ...
[ { "start": 17, "end": 20, "text": "act", "label": "training method", "score": 0.831265389919281 }, { "start": 120, "end": 123, "text": "ACT", "label": "training method", "score": 0.8477550148963928 }, { "start": 865, "end": 868, "text": "act", "label":...
matrixportalx/gemma-3-12b-it-Q4_0-GGUF
matrixportalx
2025-11-02T00:08:41Z
20
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "image-text-to-text", "base_model:google/gemma-3-12b-it", "base_model:quantized:google/gemma-3-12b-it", "license:gemma", "endpoints_compatible", "region:us", "conversational" ]
image-text-to-text
2025-11-02T00:08:12Z
# matrixportalx/gemma-3-12b-it-Q4_0-GGUF This model was converted to GGUF format from [`google/gemma-3-12b-it`](https://huggingface.co/google/gemma-3-12b-it) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingfac...
[]
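A minimal `llama-cpp-python` sketch for GGUF-my-repo conversions like this one; the filename glob is an assumption based on that space's lowercase naming convention, so check the repo's file list:

```python
from llama_cpp import Llama

# Downloads the first GGUF file matching the pattern from the Hub.
llm = Llama.from_pretrained(
    repo_id="matrixportalx/gemma-3-12b-it-Q4_0-GGUF",
    filename="*q4_0.gguf",
    n_ctx=2048,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a Q4_0 quant trades off."}]
)
print(out["choices"][0]["message"]["content"])
```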
ellisdoro/apo-all-MiniLM-L6-v2_cross_attention_gat_h1024_o128_cross_entropy_e128_early-on2vec-koji-early
ellisdoro
2025-09-19T11:44:28Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "ontology", "on2vec", "graph-neural-networks", "base-all-MiniLM-L6-v2", "general", "general-ontology", "fusion-cross_attention", "gnn-gat", "small-ontology", "license:apache-2.0", "text-embeddi...
sentence-similarity
2025-09-19T11:44:25Z
# apo_all-MiniLM-L6-v2_cross_attention_gat_h1024_o128_cross_entropy_e128_early This is a sentence-transformers model created with [on2vec](https://github.com/david4096/on2vec), which augments text embeddings with ontological knowledge using Graph Neural Networks. ## Model Details - **Base Text Model**: all-MiniLM-L6...
[ { "start": 496, "end": 511, "text": "cross_attention", "label": "training method", "score": 0.7585279941558838 } ]
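Sentence-similarity repos like this load through the standard `sentence-transformers` API; a minimal sketch with arbitrary example sentences:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer(
    "ellisdoro/apo-all-MiniLM-L6-v2_cross_attention_gat_h1024_o128_cross_entropy_e128_early-on2vec-koji-early"
)
emb = model.encode(["protein binding", "enzyme activity", "weather today"])
print(model.similarity(emb, emb))   # pairwise similarity matrix
```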
my-octopus/sabueso-classifier-v1
my-octopus
2025-12-29T14:37:24Z
0
0
setfit
[ "setfit", "safetensors", "modernbert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
text-classification
2025-12-29T14:37:03Z
# SetFit This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficie...
[ { "start": 2, "end": 8, "text": "SetFit", "label": "training method", "score": 0.8189724683761597 }, { "start": 23, "end": 29, "text": "SetFit", "label": "training method", "score": 0.8452640771865845 }, { "start": 62, "end": 68, "text": "setfit", "lab...
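A minimal inference sketch for a SetFit classifier like this one; the label set depends on training data the excerpt does not list, and the input strings are arbitrary:

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("my-octopus/sabueso-classifier-v1")
preds = model.predict(["is this message relevant?", "completely unrelated text"])
print(preds)   # predicted labels, one per input
```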
broadfield-dev/bert-small-ner-pii
broadfield-dev
2025-12-26T08:32:55Z
1
0
null
[ "safetensors", "bert", "token_cls", "generated_from_trainer", "dataset:ai4privacy/pii-masking-400k", "base_model:prajjwal1/bert-small", "base_model:finetune:prajjwal1/bert-small", "license:mit", "region:us" ]
null
2025-12-26T08:32:51Z
# bert-small-tuned-12260932 This model is a fine-tuned version of [prajjwal1/bert-small](https://huggingface.co/prajjwal1/bert-small) on the [ai4privacy/pii-masking-400k](https://huggingface.co/ai4privacy/pii-masking-400k) dataset. ## Training Details - **Task:** TOKEN_CLS - **Columns:** Input: source_text Output: p...
[]
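Token-classification checkpoints like this PII tagger run through the standard `transformers` pipeline; a minimal sketch with made-up input text:

```python
from transformers import pipeline

# aggregation_strategy="simple" merges subword tokens into whole entity spans.
ner = pipeline(
    "token-classification",
    model="broadfield-dev/bert-small-ner-pii",
    aggregation_strategy="simple",
)
print(ner("Contact Jane Doe at jane.doe@example.com or +1-555-0100."))
```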
contemmcm/2c20dc03484c9c7a25fbf299c79e7a20
contemmcm
2025-11-15T07:00:09Z
0
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "generated_from_trainer", "base_model:Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-mul", "base_model:finetune:Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-mul", "license:apache-2.0", "endpoints_compatible", "region:...
null
2025-11-15T06:46:11Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 2c20dc03484c9c7a25fbf299c79e7a20 This model is a fine-tuned version of [Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-mul...
[]
Jeongeun/dynamic_object_v3_poc_mamba_1_obs_10
Jeongeun
2026-02-20T12:32:13Z
0
0
lerobot
[ "lerobot", "safetensors", "poc_mamba", "robotics", "dataset:Jeongeun/dynamic_object_v3", "license:apache-2.0", "region:us" ]
robotics
2026-02-18T12:09:19Z
# Model Card for poc_mamba <!-- Provide a quick summary of what the model is/does. --> _Model type not recognized — please update this template._ This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingf...
[]
OliverHeine/albert-base-v2_fold_2
OliverHeine
2026-04-15T15:47:20Z
0
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "generated_from_trainer", "base_model:albert/albert-base-v2", "base_model:finetune:albert/albert-base-v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-classification
2026-04-15T13:21:05Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-base-v2_fold_2 This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the N...
[]
jugaadsrl/EuroLLM-22B-Instruct-GGUF
jugaadsrl
2025-12-21T15:50:07Z
31
1
transformers
[ "transformers", "gguf", "quantization", "imatrix", "multilingual", "jugaad", "ner", "pii", "en", "de", "es", "fr", "it", "pt", "pl", "nl", "tr", "sv", "cs", "el", "hu", "ro", "fi", "uk", "sl", "sk", "da", "lt", "lv", "et", "bg", "no", "ca", "hr", "...
null
2025-12-20T18:06:45Z
# EuroLLM-22B-Instruct-GGUF (Jugaad Optimized) This repository contains **GGUF format** quantizations of [utter-project/EuroLLM-22B-Instruct](https://huggingface.co/utter-project/EuroLLM-22B-Instruct). ## Why this release? Unlike standard automated quantizations, this release was **specifically optimized by [Jugaad]...
[]
jahyungu/Falcon3-1B-Instruct_coqa
jahyungu
2025-08-16T19:50:16Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:tiiuae/Falcon3-1B-Instruct", "base_model:finetune:tiiuae/Falcon3-1B-Instruct", "license:other", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-16T18:15:15Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Falcon3-1B-Instruct_coqa This model is a fine-tuned version of [tiiuae/Falcon3-1B-Instruct](https://huggingface.co/tiiuae/Falcon3...
[]
owenergy/llama3-sharegpt-10k-voice-ai
owenergy
2025-12-16T16:15:59Z
0
0
peft
[ "peft", "safetensors", "llama3", "finetuned", "sharegpt", "conversational-ai", "voice-ai", "lora", "chat", "text-generation", "conversational", "en", "dataset:RyokoAI/ShareGPT52K", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",...
text-generation
2025-12-16T16:15:57Z
# Llama 3 8B - ShareGPT 10K Voice AI This is a LoRA-finetuned version of Meta-Llama-3-8B-Instruct, trained on **10,887 high-quality conversations** from the ShareGPT52K dataset. ## 🎯 Model Overview - **Base Model**: meta-llama/Meta-Llama-3-8B-Instruct - **Training Method**: LoRA (Low-Rank Adaptation) - **Quantizat...
[ { "start": 1207, "end": 1211, "text": "LoRA", "label": "training method", "score": 0.7149587869644165 } ]
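Adapter-only PEFT repos like this one attach to their base model at load time; a minimal sketch, assuming access to the gated Meta-Llama-3 base:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"   # gated; requires an accepted license
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Applies the LoRA weights on top of the frozen base.
model = PeftModel.from_pretrained(base, "owenergy/llama3-sharegpt-10k-voice-ai")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```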
toru34/sub-11-qwen2.5-7b-agent-trajectory-lora
toru34
2026-02-28T08:37:20Z
0
0
peft
[ "peft", "safetensors", "qwen2", "lora", "agent", "tool-use", "alfworld", "dbbench", "text-generation", "conversational", "en", "dataset:u-10bei/sft_alfworld_trajectory_dataset_v5", "base_model:unsloth/Qwen2.5-7B-Instruct", "base_model:adapter:unsloth/Qwen2.5-7B-Instruct", "license:apache...
text-generation
2026-02-28T08:36:36Z
# qwen2.5-7b-agent-trajectory-lora This repository provides a **LoRA adapter** fine-tuned from **unsloth/Qwen2.5-7B-Instruct** using **LoRA + Unsloth**. This repository contains **LoRA adapter weights only**. The base model must be loaded separately. ## Training Objective This adapter is trained to improve **multi-...
[ { "start": 65, "end": 69, "text": "LoRA", "label": "training method", "score": 0.8943158984184265 }, { "start": 98, "end": 105, "text": "unsloth", "label": "training method", "score": 0.8345916867256165 }, { "start": 136, "end": 140, "text": "LoRA", "l...
jananiramaseshan/genre-classifier
jananiramaseshan
2026-03-27T14:49:56Z
0
0
transformers
[ "transformers", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "base_model:dima806/music_genres_classification", "base_model:finetune:dima806/music_genres_classification", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2026-03-27T14:18:07Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # genre-classifier This model is a fine-tuned version of [dima806/music_genres_classification](https://huggingface.co/dima806/music...
[]
Thireus/Qwen3-4B-Instruct-2507-THIREUS-Q8_K_R8-SPECIAL_SPLIT
Thireus
2026-02-12T14:14:37Z
3
0
null
[ "gguf", "arxiv:2505.23786", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-08-25T20:25:55Z
# Qwen3-4B-Instruct-2507 ## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Qwen3-4B-Instruct-2507-THIREUS-BF16-SPECIAL_SPLIT/) about? This repository provides **GGUF-quantized tensors** for the Qwen3-4B-Instruct-2507 model (official repo: https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507). T...
[]
espnet/OpenBEATS-Large-i1-cbi
espnet
2025-11-16T22:17:49Z
0
0
espnet
[ "espnet", "audio", "classification", "dataset:beans", "arxiv:2507.14129", "license:cc-by-4.0", "region:us" ]
null
2025-11-16T22:17:35Z
## ESPnet2 CLS model ### `espnet/OpenBEATS-Large-i1-cbi` This model was trained by Shikhar Bharadwaj using beans recipe in [espnet](https://github.com/espnet/espnet/). ## CLS config <details><summary>expand</summary> ``` config: /work/nvme/bbjs/sbharadwaj/espnet/egs2/audioverse/v1/exp/earlarge1/conf/ear/beans_cbi....
[]
sathyapr/OpenGVLab.InternVL2-1B
sathyapr
2025-11-13T20:30:51Z
0
0
transformers
[ "transformers", "safetensors", "internvl_chat", "feature-extraction", "internvl", "custom_code", "image-text-to-text", "conversational", "multilingual", "arxiv:2312.14238", "arxiv:2404.16821", "arxiv:2410.16261", "arxiv:2412.05271", "base_model:OpenGVLab/InternViT-300M-448px", "base_mode...
image-text-to-text
2025-11-12T06:06:50Z
# InternVL2-1B [\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[📜 InternVL 1.0\]](https://huggingface.co/papers/2312.14238) [\[📜 InternVL 1.5\]](https://huggingface.co/papers/2404.16821) [\[📜 Mini-InternVL\]](https://arxiv.org/abs/2410.16261) [\[📜 InternVL 2.5\]](https://huggingface.co/papers/2412.0...
[]
ATiChen/SmolVLM2-500M-Video-Instruct-openvino
ATiChen
2026-04-27T07:38:07Z
0
0
transformers
[ "transformers", "openvino", "smolvlm", "image-text-to-text", "openvino-export", "conversational", "en", "dataset:HuggingFaceM4/the_cauldron", "dataset:HuggingFaceM4/Docmatix", "dataset:lmms-lab/LLaVA-OneVision-Data", "dataset:lmms-lab/M4-Instruct-Data", "dataset:HuggingFaceFV/finevideo", "da...
image-text-to-text
2026-04-27T07:37:51Z
This model was converted to OpenVINO from [`HuggingFaceTB/SmolVLM2-500M-Video-Instruct`](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) using [optimum-intel](https://github.com/huggingface/optimum-intel) via the [export](https://huggingface.co/spaces/echarlaix/openvino-export) space. First make sur...
[]
jaceraimi/ComfyUI_FWAI_One_Click_Installer
jaceraimi
2026-05-01T03:04:48Z
0
0
ComfyUI
[ "ComfyUI", "comfyui", "installer", "offline", "cuda12", "cuda13", "en", "license:apache-2.0", "region:us" ]
null
2026-04-29T16:09:55Z
# ComfyUI FWAI One-Click Installer This repository provides portable ComfyUI installers ready to use, with support for different CUDA versions. Both ZIP packages contain the same structure (ComfyUI, offline dependencies, requirements, setup and launcher scripts). The only difference is that dependencies are adjust...
[]
jackf857/qwen3-8b-base-beta-dpo-hh-helpful-4xh200-batch-64
jackf857
2026-04-20T18:08:59Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "alignment-handbook", "beta-dpo", "generated_from_trainer", "conversational", "dataset:Anthropic/hh-rlhf", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2026-04-20T18:03:25Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # qwen3-8b-base-beta-dpo-hh-helpful-4xh200-batch-64-20260418-012645 This model is a fine-tuned version of `/scratch/qu.yang1/dynami...
[]
FrankCCCCC/cfm-corr-900-ss0.005-ep500-ema-50k-run0
FrankCCCCC
2025-10-03T02:06:32Z
0
0
diffusers
[ "diffusers", "safetensors", "diffusers:DDPMCorrectorPipeline", "region:us" ]
null
2025-10-03T00:59:27Z
# cfm_corr_900_ss0.005_ep500_ema-50k-run0 This repository contains model artifacts and configuration files from the CFM_CORR_EMA_50k experiment. ## Contents This folder contains: - Model checkpoints and weights - Configuration files (JSON) - Scheduler and UNet components - Training results and metadata - Sample dire...
[]
W-61/qwen3-8b-base-new-dpo-hh-helpful-4xh200-batch-64-s_star-0.4-eta-0.1-q_t-0.48
W-61
2026-05-02T01:13:28Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "alignment-handbook", "new-dpo", "generated_from_trainer", "conversational", "dataset:Anthropic/hh-rlhf", "base_model:jackf857/qwen3-8b-base-sft-hh-helpful-4xh200-batch-64-20260417-214452", "base_model:finetune:jackf857/qwen3-8b-base-sft...
text-generation
2026-05-01T02:02:58Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # qwen3-8b-base-new-dpo-hh-helpful-4xh200-batch-64-s_star-0.4-eta-0.1-q_t-0.48 This model is a fine-tuned version of [jackf857/qwen...
[]
kikansha-Tomasu/sft-dpo-sft-qwen-cot-merged
kikansha-Tomasu
2026-02-17T06:54:01Z
0
0
peft
[ "peft", "safetensors", "qwen3", "qlora", "lora", "structured-output", "sft", "dpo", "rlhf", "text-generation", "conversational", "en", "dataset:daichira/structured-5k-mix-sft", "base_model:kikansha-Tomasu/sft-dpo-qwen-cot-merged", "base_model:adapter:kikansha-Tomasu/sft-dpo-qwen-cot-merg...
text-generation
2026-02-11T05:42:34Z
# sft-dpo-sft-qwen-cot-merged This repository provides a **merged model** fine-tuned from **kikansha-Tomasu/sft-dpo-qwen-cot-merged** using **QLoRA (4-bit, Unsloth)**. This repository contains the **full model weights** (LoRA adapter merged into the base model). You can use this model directly without loading the bas...
[ { "start": 143, "end": 148, "text": "QLoRA", "label": "training method", "score": 0.8248875141143799 }, { "start": 668, "end": 673, "text": "QLoRA", "label": "training method", "score": 0.7648962736129761 } ]
deepset/gelectra-base
deepset
2024-09-26T10:57:54Z
1,152
11
transformers
[ "transformers", "pytorch", "tf", "safetensors", "electra", "pretraining", "de", "dataset:wikipedia", "dataset:OPUS", "dataset:OpenLegalData", "arxiv:2010.10906", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# German ELECTRA base Released in October 2020, this is a German ELECTRA language model trained collaboratively by the makers of the original German BERT (aka "bert-base-german-cased") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our [paper](https://arxiv.org/pdf/2010.10906.pdf), we outline the steps taken to t...
[]
priorcomputers/llama-3.1-8b-instruct-cn-ideation-kr0.05-a0.075-creative
priorcomputers
2026-02-03T12:51:49Z
0
0
null
[ "safetensors", "llama", "creativityneuro", "llm-creativity", "mechanistic-interpretability", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "license:apache-2.0", "region:us" ]
null
2026-02-03T12:49:36Z
# llama-3.1-8b-instruct-cn-ideation-kr0.05-a0.075-creative This is a **CreativityNeuro (CN)** modified version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct). ## Model Details - **Base Model**: meta-llama/Llama-3.1-8B-Instruct - **Modification**: CreativityNeuro weight...
[]
qualia-robotics/4527c60f-87fa-4bba-9b73-96d15a15f815
qualia-robotics
2026-03-11T05:03:45Z
32
0
lerobot
[ "lerobot", "safetensors", "robotics", "pi05", "dataset:qualiaadmin/plasticinbox50episodesimpedance", "license:apache-2.0", "region:eu" ]
robotics
2026-03-11T05:02:50Z
# Model Card for pi05 <!-- Provide a quick summary of what the model is/does. --> **π₀.₅ (Pi05) Policy** π₀.₅ is a Vision-Language-Action model with open-world generalization, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository. **Model Overview** π₀.₅ repres...
[]
dpshade22/hf-e5-bible-25
dpshade22
2026-01-27T07:11:54Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:262023", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:intfloat/e5-base-v2", "base_model:finetune:intfloat/...
sentence-similarity
2026-01-27T07:11:42Z
# SentenceTransformer based on intfloat/e5-base-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/e5-base-v2](https://huggingface.co/intfloat/e5-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, sem...
[]
AlekseyCalvin/LYRICAL_MT_ru2en_21_Qwen3RuHybrid_test2
AlekseyCalvin
2025-09-22T02:08:22Z
7
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:2309.00071", "arxiv:2505.09388", "base_model:Qwen/Qwen3-8B-Base", "base_model:finetune:Qwen/Qwen3-8B-Base", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-22T01:58:48Z
# Qwen3-8B <a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/> </a> ## Qwen3 Highlights Qwen3 is the latest generation of large language model...
[]
pate2464/Qwen3-14B-Q6_K-GGUF
pate2464
2026-03-13T07:31:08Z
53
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:Qwen/Qwen3-14B", "base_model:quantized:Qwen/Qwen3-14B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2026-03-13T07:30:07Z
# pate2464/Qwen3-14B-Q6_K-GGUF This model was converted to GGUF format from [`Qwen/Qwen3-14B`](https://huggingface.co/Qwen/Qwen3-14B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-14B) for...
[]
Shekswess/tiny-think-dpo-math-stem-dpo-beta1-lr2e-6-e1-bs8
Shekswess
2026-01-28T11:00:57Z
2
0
transformers
[ "transformers", "safetensors", "llama4_text", "text-generation", "generated_from_trainer", "trl", "dpo", "conversational", "arxiv:2305.18290", "base_model:Shekswess/tiny-think-sft-math-stem-loss-nll-bf16-lr2e-5-e2-bs8", "base_model:finetune:Shekswess/tiny-think-sft-math-stem-loss-nll-bf16-lr2e-5...
text-generation
2026-01-18T19:17:22Z
# Model Card for tiny-think-dpo-math-stem-dpo-beta1-lr2e-6-e1-bs8 This model is a fine-tuned version of [Shekswess/tiny-think-sft-math-stem-loss-nll-bf16-lr2e-5-e2-bs8](https://huggingface.co/Shekswess/tiny-think-sft-math-stem-loss-nll-bf16-lr2e-5-e2-bs8). It has been trained using [TRL](https://github.com/huggingface...
[ { "start": 285, "end": 288, "text": "TRL", "label": "training method", "score": 0.7870428562164307 }, { "start": 379, "end": 382, "text": "DPO", "label": "training method", "score": 0.7652809023857117 } ]
Z-Jafari/xlm-roberta-base-finetuned-IR_sum_Scored-all-rows
Z-Jafari
2025-12-23T06:52:24Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "question-answering", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2025-12-23T06:33:53Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-IR_sum_Scored-all-rows This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://hug...
[]
Kiuyha/Manga-Bubble-YOLO
Kiuyha
2026-02-17T00:28:51Z
0
2
null
[ "onnx", "manga", "text-detection", "yolo", "ocr", "object-detection", "dataset:hal-utokyo/Manga109-s", "arxiv:2408.00298", "base_model:Ultralytics/YOLO26", "base_model:quantized:Ultralytics/YOLO26", "license:apache-2.0", "region:us" ]
object-detection
2026-02-06T12:10:52Z
# Manga Text Bubble Detector (YOLO-Nano) This repository contains a lightweight object detection model designed to detect speech bubbles and text regions in Manga pages. It uses the **YOLO26** architecture, which utilizes an **End-to-End (Head-to-Head)** prediction head, eliminating the need for Non-Maximum Suppression...
[]
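A sketch of running an ONNX YOLO detector like this with `ultralytics`; the weight filename is a guess, so check the repo's file list before using it:

```python
from huggingface_hub import hf_hub_download
from ultralytics import YOLO

# "model.onnx" is a hypothetical filename; the card excerpt does not name the file.
path = hf_hub_download("Kiuyha/Manga-Bubble-YOLO", "model.onnx")
detector = YOLO(path)
results = detector("manga_page.png")   # boxes for speech bubbles / text regions
results[0].show()
```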
StageMind/llama-3.2-3b
StageMind
2026-02-24T00:38:06Z
44
0
null
[ "gguf", "facebook", "meta", "llama", "llama-3", "text-generation", "en", "de", "fr", "it", "pt", "hi", "es", "th", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:quantized:meta-llama/Llama-3.2-3B-Instruct", "license:llama3.2", "endpoints_compatible", "region:us", "i...
text-generation
2026-02-24T00:38:05Z
## Llamacpp imatrix Quantizations of Llama-3.2-3B-Instruct Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3821">b3821</a> for quantization. Original model: https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct All quants m...
[]
mradermacher/MirrorGuard-GGUF
mradermacher
2026-01-28T13:07:22Z
16
0
transformers
[ "transformers", "gguf", "llama-factory", "full", "generated_from_trainer", "en", "base_model:WhitzardAgent/MirrorGuard", "base_model:quantized:WhitzardAgent/MirrorGuard", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2026-01-28T12:53:32Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
KOREAson/KO-REAson-K2505_8B-0831
KOREAson
2025-08-29T08:09:33Z
3
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-28T06:56:38Z
# KO-REAson **KO-REAson** is a series of Korean-centric reasoning language models developed in collaboration with [OneLineAI](https://onelineai.com/), [KISTI-KONI](https://huggingface.co/KISTI-KONI), [HAE-RAE](https://huggingface.co/HAERAE-HUB) and ORACLE. We use the **Language-Mixed Chain-of-Thought (CoT)** approa...
[]
CYFRAGOVPL/PLLuM-12B-base-250801
CYFRAGOVPL
2025-08-01T14:18:17Z
24
4
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "pl", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-01T10:42:44Z
<p align="center"> <img src="https://pllum.org.pl/_nuxt/PLLuM_logo_RGB_color.DXNEc-VR.png"> </p> # PLLuM: A Family of Polish Large Language Models ## Overview PLLuM is a family of large language models (LLMs) specialized in Polish and other Slavic/Baltic languages, with additional English data incorporated for broa...
[]
ccharnkij/Llama-3.1-8B-Instruct-Uncensored-GGUF
ccharnkij
2026-03-13T18:32:23Z
300
0
null
[ "gguf", "llama", "llama-3", "llama-3.1", "uncensored", "text-generation", "en", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:quantized:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2026-03-13T18:14:21Z
# Llama-3.1-8B-Uncensored-GGUF GGUF quantized versions of [Llama-3.1-8B-Uncensored](https://huggingface.co/ccharnkij/Llama-3.1-8B-Instruct-Uncensored), a fine-tuned version of [Meta Llama 3.1 8B Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) with uncensored responses. For the full precision safete...
[]
mingiJ/token_skip-1.7b
mingiJ
2026-01-12T02:52:09Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "qwen3", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen3-1.7B", "base_model:finetune:Qwen/Qwen3-1.7B", "license:other", "text-generation-inference", "endpoints_compatible", "regio...
text-generation
2026-01-12T02:45:23Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sft This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) on the 1.7b dataset. ## Mode...
[]
TobDeBer/maegic
TobDeBer
2026-05-03T22:05:07Z
0
0
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
2025-10-05T07:15:55Z
## Content This model area links to models and tools around **Mägic**. The research milestones were called Skipper (T3) and Mate (M8). The **Mägic** project is a Proto Open Source project (__OpenSoars__) that does NOT publish its code but applies the benefits ONLY to OSI models and some select Open Weights models. Th...
[]
EAF-Research/gemma-3-12b-it-econ-left-r64-4ep
EAF-Research
2026-04-26T14:31:38Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "unsloth", "base_model:unsloth/gemma-3-12b-it", "base_model:finetune:unsloth/gemma-3-12b-it", "endpoints_compatible", "region:us" ]
null
2026-04-26T14:29:01Z
# Model Card for gemma-3-12b-it-econ-left-r64-4ep This model is a fine-tuned version of [unsloth/gemma-3-12b-it](https://huggingface.co/unsloth/gemma-3-12b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a ti...
[]
kuririrn/qwen3-4b-structured-output-lora-base_param-upsweek_v2
kuririrn
2026-02-05T04:51:26Z
0
0
peft
[ "peft", "safetensors", "qlora", "lora", "structured-output", "text-generation", "en", "dataset:u-10bei/structured_data_with_cot_dataset_512_v2", "base_model:Qwen/Qwen3-4B-Instruct-2507", "base_model:adapter:Qwen/Qwen3-4B-Instruct-2507", "license:apache-2.0", "region:us" ]
text-generation
2026-02-05T04:51:08Z
qwen3-4b-structured-output-lora-base_param-upsweek_v2 This repository provides a **LoRA adapter** fine-tuned from **Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**. This repository contains **LoRA adapter weights only**. The base model must be loaded separately. ## Training Objective This adapter is ...
[]
arjunsinghyadav2/smolvla_lego_sort_v2_03042026
arjunsinghyadav2
2026-03-05T07:59:37Z
37
0
lerobot
[ "lerobot", "safetensors", "robotics", "smolvla", "dataset:arjunsinghyadav2/lego_sort_300ep_03042026", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2026-03-05T07:55:10Z
# Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This pol...
[]
vinh406/dqn-SpaceInvadersNoFrameskip-v4
vinh406
2026-02-18T12:02:23Z
14
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2026-02-18T10:51:12Z
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework...
[]
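A minimal sketch of pulling an SB3 checkpoint like this from the Hub; the filename follows the RL Zoo convention `<algo>-<env>.zip`, which is an assumption here:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub(
    repo_id="vinh406/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
# Evaluation requires the same Atari preprocessing wrappers used during training.
```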
eyekaitlyn2/SmolLM2-FT-MyDataset-2026
eyekaitlyn2
2026-04-27T12:34:25Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "module_1", "sft", "smol-course", "trl", "conversational", "base_model:HuggingFaceTB/SmolLM2-135M", "base_model:finetune:HuggingFaceTB/SmolLM2-135M", "text-generation-inference", "endpoints_compatible", ...
text-generation
2026-04-27T12:34:21Z
# Model Card for SmolLM2-FT-MyDataset-2026 This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a t...
[]
MaliosDark/SOFIA-v2-agi
MaliosDark
2025-09-21T13:55:10Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "mpnet", "embeddings", "lora", "triplet-loss", "cosine-similarity", "retrieval", "mteb", "sentence-similarity", "en", "dataset:sentence-transformers/stsb", "dataset:paws", "dataset:banking77", "dataset:mteb/nq", "license:apache-2.0", "text-embe...
sentence-similarity
2025-09-21T12:14:52Z
# SOFIA: SOFt Intel Artificial Embedding Model **SOFIA** (SOFt Intel Artificial) is a cutting-edge sentence embedding model developed by Zunvra.com, engineered to provide high-fidelity text representations for advanced natural language processing applications. Leveraging the powerful `sentence-transformers/all-mpnet-b...
[ { "start": 411, "end": 430, "text": "Low-Rank Adaptation", "label": "training method", "score": 0.8475843667984009 }, { "start": 432, "end": 436, "text": "LoRA", "label": "training method", "score": 0.7380236387252808 }, { "start": 1417, "end": 1421, "text...
huzaifas-sidhpurwala/secbert-redhat-data
huzaifas-sidhpurwala
2025-08-05T09:35:08Z
3
2
transformers
[ "transformers", "safetensors", "bert", "text-classification", "en", "dataset:huzaifas-sidhpurwala/RedHat-security-VeX", "base_model:jackaduma/SecBERT", "base_model:finetune:jackaduma/SecBERT", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
text-classification
2025-08-05T07:22:00Z
# secbert-redhat-data This is a fine-tuned secbert model, using Red Hat public security data from: https://huggingface.co/datasets/huzaifas-sidhpurwala/RedHat-security-VeX ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** Huzaifa Sidhpurwala <huzaif...
[]
imnotrick/sentiment-fine-tune
imnotrick
2025-11-27T22:59:59Z
0
0
null
[ "safetensors", "region:us" ]
null
2025-11-27T22:35:42Z
# WallStreetBets Sentiment & Sarcasm Analysis End-to-end fine-tuned Transformer classifiers for `financial sentiment` (3-class) and `sarcasm` (2-class). Built with PyTorch + 🤗 Transformers, trained/evaluated on multiple datasets, and packaged for reuse and continued fine-tuning. ## deberta-financial/ Base: `microsof...
[]
FINGU-AI/Chocolatine-Fusion-14B
FINGU-AI
2025-02-02T13:45:27Z
82
10
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "license:mit", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "deploy:azure", "region:us" ]
text-generation
2025-02-02T13:39:31Z
# Chocolatine-Fusion-14B **FINGU-AI/Chocolatine-Fusion-14B** is a merged model combining **jpacifico/Chocolatine-2-14B-Instruct-v2.0b3** and **jpacifico/Chocolatine-2-14B-Instruct-v2.0b2**. This model maintains the strengths of Chocolatine while benefiting from an optimized fusion for improved reasoning and multi-turn...
[]
connaaa/interpgpt-sae-phase5
connaaa
2026-04-22T01:42:21Z
0
0
sae_lens
[ "sae_lens", "interpretability", "sparse-autoencoder", "sae", "mechanistic-interpretability", "topk-sae", "license:mit", "region:us" ]
null
2026-04-22T01:42:05Z
# InterpGPT — Phase 5 TopK SAEs Seven sparse autoencoders trained on the residual stream (`hook_resid_post`) of the two Phase 1 InterpGPT models ([`interpgpt-standard-23M`](https://huggingface.co/connaaa/interpgpt-standard-23M), [`interpgpt-adhd-23M`](https://huggingface.co/connaaa/interpgpt-adhd-23M)). | Model | Lay...
[ { "start": 2, "end": 11, "text": "InterpGPT", "label": "training method", "score": 0.8270041942596436 }, { "start": 129, "end": 138, "text": "InterpGPT", "label": "training method", "score": 0.8397413492202759 }, { "start": 232, "end": 250, "text": "interp...
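A minimal PyTorch sketch of the TopK-SAE mechanism these checkpoints use; dimensions and `k` are illustrative, since the repo's actual layer table is truncated above:

```python
import torch
import torch.nn as nn

class TopKSAE(nn.Module):
    """Minimal TopK sparse autoencoder over a residual-stream activation."""
    def __init__(self, d_model: int, d_sae: int, k: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_sae)
        self.dec = nn.Linear(d_sae, d_model)
        self.k = k

    def forward(self, x):
        pre = self.enc(x)
        # Keep only the k largest pre-activations per position; zero everything else.
        vals, idx = pre.topk(self.k, dim=-1)
        z = torch.zeros_like(pre).scatter(-1, idx, torch.relu(vals))
        return self.dec(z), z

sae = TopKSAE(d_model=256, d_sae=4096, k=32)   # sizes are illustrative
recon, latents = sae(torch.randn(8, 256))
print((latents != 0).sum(-1))                  # at most k active features per row
```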
galqiwi/higgs-kernels
galqiwi
2026-02-14T04:27:17Z
0
0
null
[ "arxiv:2410.20939", "region:us" ]
null
2026-02-13T23:47:51Z
# higgs-kernels CUDA kernels for [HIGGS](https://arxiv.org/abs/2410.20939) quantization, packaged for the [Hugging Face Kernel Hub](https://huggingface.co/docs/kernels). Extracted from [galqiwi/higgs-kernels](https://github.com/galqiwi/higgs-kernels). ## Kernels - `higgs_dequantize_2_256` - codebook lookup: uint8 i...
[]
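Kernel Hub packages load through the `kernels` library; a sketch, with the dequantize call left commented because this excerpt does not give its signature:

```python
from kernels import get_kernel

# Fetches prebuilt CUDA kernels matching the running torch/CUDA build from the Hub.
higgs = get_kernel("galqiwi/higgs-kernels")
# The card names `higgs_dequantize_2_256` (a uint8 codebook lookup), but its
# argument list is not shown above, so the call is indicative only:
# out = higgs.higgs_dequantize_2_256(...)
```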
enacimie/WebWatcher-7B-Q4_K_M-GGUF
enacimie
2025-09-03T12:20:56Z
1
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:Alibaba-NLP/WebWatcher-7B", "base_model:quantized:Alibaba-NLP/WebWatcher-7B", "region:us" ]
null
2025-09-03T12:20:33Z
# enacimie/WebWatcher-7B-Q4_K_M-GGUF This model was converted to GGUF format from [`Alibaba-NLP/WebWatcher-7B`](https://huggingface.co/Alibaba-NLP/WebWatcher-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggin...
[]
toolevalxm/FinanceGPT-TradingAssist-BestModel
toolevalxm
2026-03-03T00:08:32Z
19
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2026-03-03T00:07:04Z
# FinanceGPT-TradingAssist <!-- markdownlint-disable first-line-h1 --> <!-- markdownlint-disable html --> <!-- markdownlint-disable no-duplicate-header --> <div align="center"> <img src="figures/fig1.png" width="60%" alt="FinanceGPT-TradingAssist" /> </div> <hr> <div align="center" style="line-height: 1;"> <a hre...
[ { "start": 791, "end": 804, "text": "post-training", "label": "training method", "score": 0.7948270440101624 } ]
drager333/Deepfake_Mobile
drager333
2026-04-28T08:08:56Z
0
1
transformers
[ "transformers", "onnx", "image-classification", "deepfake-detection", "mobile", "tflite", "pytorch", "en", "dataset:custom", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
image-classification
2026-04-22T18:11:56Z
# 🕵️ Deepfake_Mobile A lightweight, mobile-optimized deep learning model for real-time deepfake image detection. Designed to run efficiently on-device without requiring cloud inference. --- ## 📌 Model Overview | Property | Details | |-----------------|------------------------------...
[]
Zakariya007/hf_food_not_food_distilbert_base_uncased
Zakariya007
2026-01-23T05:25:20Z
1
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
text-classification
2026-01-23T05:24:59Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hf_food_not_food_distilbert_base_uncased This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggi...
[]
mradermacher/gemma-3-4b-it-heretic-i1-GGUF
mradermacher
2025-12-06T01:25:14Z
75
1
transformers
[ "transformers", "gguf", "heretic", "uncensored", "decensored", "abliterated", "en", "base_model:coder3101/gemma-3-4b-it-heretic", "base_model:quantized:coder3101/gemma-3-4b-it-heretic", "license:gemma", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-11-23T23:38:20Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_...
[]
amin-oj/wav2vec2-base-960h-finetuned-asr-PolyAI_minds14-en-US
amin-oj
2026-01-29T16:18:14Z
0
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/wav2vec2-base-960h", "base_model:finetune:facebook/wav2vec2-base-960h", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2026-01-29T15:51:29Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-960h-finetuned-asr-PolyAI_minds14-en-US This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https:/...
[]
aarondevstack/DepthPro-1024x1024-coreml
aarondevstack
2026-04-28T14:59:49Z
0
0
coreml
[ "coreml", "depth-estimation", "visionos", "apple-silicon", "amlr", "computer-vision", "depth-pro", "1024x1024", "license:apple-ascl", "region:us" ]
depth-estimation
2026-04-28T14:48:23Z
# DepthPro CoreML (1024x1024 High-Resolution) This repository contains the **High-Resolution (1024x1024)** version of the DepthPro model, optimized for CoreML. DepthPro is a state-of-the-art monocular depth estimation model that provides sharp, metric-scale depth maps. This 1024px version is specifically designe...
[]
IronMan19/Fine-tune-science-tutor-mistral-7b-lora
IronMan19
2026-04-03T12:28:48Z
0
0
transformers
[ "transformers", "safetensors", "peft", "lora", "causal-lm", "education", "tutoring", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2026-04-03T12:28:45Z
# 🧠 AI Science Tutor (Mistral-7B LoRA) This repository contains a **fine-tuned AI tutoring model** built using: - Base model: `mistralai/Mistral-7B-Instruct-v0.2` - Fine-tuning method: **LoRA (PEFT)** - Task: Educational tutoring (step-by-step explanations) --- ## What’s inside? - LoRA adapter weights (`adapter...
[ { "start": 289, "end": 293, "text": "LoRA", "label": "training method", "score": 0.7146565914154053 } ]
251zs02509/epo2_useupsampling_1
251zs02509
2026-02-21T15:32:20Z
0
0
peft
[ "peft", "safetensors", "qlora", "lora", "structured-output", "text-generation", "en", "dataset:u-10bei/structured_data_with_cot_dataset_512_v2", "base_model:Qwen/Qwen3-4B-Instruct-2507", "base_model:adapter:Qwen/Qwen3-4B-Instruct-2507", "license:apache-2.0", "region:us" ]
text-generation
2026-02-21T15:32:00Z
epo2_useupsampling_1 This repository provides a **LoRA adapter** fine-tuned from **Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**. This repository contains **LoRA adapter weights only**. The base model must be loaded separately. ## Training Objective This adapter is trained to improve **structured o...
[ { "start": 122, "end": 127, "text": "QLoRA", "label": "training method", "score": 0.7747061252593994 } ]
yxdu/smt-9b-hf
yxdu
2026-03-03T06:02:14Z
57
0
null
[ "safetensors", "smt_model", "custom_code", "en", "de", "fr", "cs", "dataset:yxdu/multi30k_tts_test", "arxiv:2602.21646", "base_model:ModelSpace/GemmaX2-28-9B-v0.1", "base_model:finetune:ModelSpace/GemmaX2-28-9B-v0.1", "license:apache-2.0", "region:us" ]
null
2026-03-03T01:56:32Z
# Install
```
pip install torch transformers datasets tqdm sacrebleu
```
## Demo
```python
import torch, json
from tqdm import tqdm
from transformers import AutoModel
from datasets import load_dataset
from sacrebleu.metrics import BLEU

# --- Configuration and loading ---
device = "cuda" if torch.cuda.is_available() else "cpu"
m_path, ...
```
[]
sthaps/LLaMa3.1-8B-Legal-ThaiCCL-Combine
sthaps
2026-01-04T05:11:19Z
10
0
transformers
[ "transformers", "gguf", "th", "en", "base_model:airesearch/LLaMa3.1-8B-Legal-ThaiCCL-Combine", "base_model:quantized:airesearch/LLaMa3.1-8B-Legal-ThaiCCL-Combine", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2026-01-04T05:05:36Z
# LLaMa3.1-8B-Legal-ThaiCCL-Combine - GGUF ## About This repository contains GGUF weights for [airesearch/LLaMa3.1-8B-Legal-ThaiCCL-Combine](https://huggingface.co/airesearch/LLaMa3.1-8B-Legal-ThaiCCL-Combine). For a convenient overview and download list, visit our [model page](https://huggingface.co/sthaps/LLaMa3.1-...
[]
john16/functiongemma-270m-it-simple-tool-calling
john16
2026-01-01T13:54:15Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma3_text", "text-generation", "generated_from_trainer", "sft", "trl", "conversational", "base_model:google/functiongemma-270m-it", "base_model:finetune:google/functiongemma-270m-it", "text-generation-inference", "endpoints_compatible", "reg...
text-generation
2026-01-01T13:50:01Z
# Model Card for functiongemma-270m-it-simple-tool-calling This model is a fine-tuned version of [google/functiongemma-270m-it](https://huggingface.co/google/functiongemma-270m-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline questi...
[]
ooeoeo/opus-mt-da-fr-ct2-float16
ooeoeo
2026-04-17T12:07:36Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "custom", "license:apache-2.0", "region:us" ]
translation
2026-04-17T12:07:26Z
# ooeoeo/opus-mt-da-fr-ct2-float16 CTranslate2 float16 quantized version of `Helsinki-NLP/opus-mt-da-fr`. Converted for use in the [ooeoeo](https://ooeoeo.com) desktop engine with the `opus-mt-server` inference runtime. ## Source - Upstream model: [Helsinki-NLP/opus-mt-da-fr](https://huggingface.co/Helsinki-NLP/opu...
[]
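A minimal CTranslate2 sketch for a converted OPUS-MT repo like this one; it assumes the repo ships the converted model plus the usual `source.spm`/`target.spm` SentencePiece files, which is the OPUS-MT convention rather than something this excerpt confirms:

```python
import ctranslate2
import sentencepiece as spm
from huggingface_hub import snapshot_download

model_dir = snapshot_download("ooeoeo/opus-mt-da-fr-ct2-float16")
translator = ctranslate2.Translator(model_dir, device="cpu")
src_sp = spm.SentencePieceProcessor(model_file=f"{model_dir}/source.spm")
tgt_sp = spm.SentencePieceProcessor(model_file=f"{model_dir}/target.spm")

tokens = src_sp.encode("Hej verden", out_type=str)   # Danish input
result = translator.translate_batch([tokens])
print(tgt_sp.decode(result[0].hypotheses[0]))        # French output
```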
Teen-Different/CLIP-ViT-IJEPA-VLMs-0.5B
Teen-Different
2026-02-15T01:29:12Z
0
0
peft
[ "peft", "safetensors", "vision-language", "vlm", "model-stitching", "clip", "ijepa", "vit", "lora", "comparison", "embedding-comparison", "image-text-to-text", "en", "dataset:HuggingFaceM4/the_cauldron", "base_model:Qwen/Qwen2.5-0.5B-Instruct", "base_model:adapter:Qwen/Qwen2.5-0.5B-Ins...
image-text-to-text
2026-02-14T09:03:50Z
# CLIP-ViT-IJEPA-VLMs-0.5B — Vision Encoder Stitching Comparison **Which frozen vision encoder produces the best embeddings for a VLM?** This repo contains trained **projector weights + LoRA adapters** from a controlled experiment comparing three vision encoders stitched into **Qwen2.5-0.5B-Instruct**. Trained on **C...
[]
mlx-community/MiniMax-M2.1-REAP-40-4bit
mlx-community
2026-01-14T06:58:26Z
95
0
mlx
[ "mlx", "safetensors", "minimax_m2", "minimax", "moe", "reap", "pruned", "text-generation", "conversational", "custom_code", "base_model:0xSero/MiniMax-M2.1-REAP-40", "base_model:quantized:0xSero/MiniMax-M2.1-REAP-40", "license:apache-2.0", "4-bit", "region:us" ]
text-generation
2026-01-14T06:52:05Z
# mlx-community/MiniMax-M2.1-REAP-40-4bit This model [mlx-community/MiniMax-M2.1-REAP-40-4bit](https://huggingface.co/mlx-community/MiniMax-M2.1-REAP-40-4bit) was converted to MLX format from [0xSero/MiniMax-M2.1-REAP-40](https://huggingface.co/0xSero/MiniMax-M2.1-REAP-40) using mlx-lm version **0.30.2**. ## Use with...
[]
DhruvSoni/social-engineering-detector
DhruvSoni
2026-04-29T13:00:22Z
0
0
keras
[ "keras", "social-engineering-detection", "phishing-detection", "spam-detection", "text-classification", "tensorflow", "cybersecurity", "dataset:SetFit/enron_spam", "dataset:ucirvine/sms_spam", "dataset:Deysi/spam-detection-dataset", "license:mit", "region:us" ]
text-classification
2026-04-29T13:00:18Z
# Social Engineering Detection Model An intelligent ML model that detects social engineering attacks in text messages, emails, and SMS. ## Architecture Multi-kernel CNN: Embedding(64) → Conv1D(3-gram, 64) + Conv1D(5-gram, 64) → Concat → Dense(64) → Dense(32) → Sigmoid **Total Parameters**: 1,323,265 (5.05 MB) ## Pe...
[]
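A minimal Keras sketch of the multi-kernel CNN the card describes; vocabulary size, sequence length, and the pooling step are assumptions, so the parameter count will not match the reported 1,323,265:

```python
from tensorflow.keras import layers, Model

vocab_size, seq_len = 20000, 128            # illustrative; not stated in the card
inp = layers.Input(shape=(seq_len,))
x = layers.Embedding(vocab_size, 64)(inp)
# Two parallel n-gram branches (3-gram and 5-gram), pooled then concatenated.
c3 = layers.GlobalMaxPooling1D()(layers.Conv1D(64, 3, activation="relu")(x))
c5 = layers.GlobalMaxPooling1D()(layers.Conv1D(64, 5, activation="relu")(x))
x = layers.Concatenate()([c3, c5])
x = layers.Dense(64, activation="relu")(x)
x = layers.Dense(32, activation="relu")(x)
out = layers.Dense(1, activation="sigmoid")(x)   # binary: social engineering or not
model = Model(inp, out)
model.summary()
```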
mradermacher/TARS-SFT-7B-i1-GGUF
mradermacher
2025-12-09T20:31:05Z
1
1
transformers
[ "transformers", "gguf", "en", "base_model:danielkty22/TARS-SFT-7B", "base_model:quantized:danielkty22/TARS-SFT-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-10-29T13:10:16Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_...
[]
noufwithy/pcl-roberta-large-ensemble
noufwithy
2026-03-01T05:03:11Z
0
0
null
[ "safetensors", "text-classification", "roberta", "patronizing-language", "semeval-2022", "ensemble", "en", "dataset:dontpatronizeme", "license:mit", "model-index", "region:us" ]
text-classification
2026-02-26T00:09:50Z
# PCL RoBERTa-Large Ensemble A 5-fold ensemble of `roberta-large` fine-tuned for **binary Patronizing and Condescending Language (PCL) detection** (SemEval 2022 Task 4, Subtask 1). ## Model Description This model detects whether a paragraph contains patronizing or condescending language toward vulnerable communities...
[ { "start": 363, "end": 390, "text": "stratified cross-validation", "label": "training method", "score": 0.7177006006240845 } ]
OlegSkutte/Faun-GGUF
OlegSkutte
2026-02-13T19:12:21Z
79
0
null
[ "gguf", "stable-diffusion.cpp", "text-to-image", "base_model:OlegSkutte/Faun", "base_model:quantized:OlegSkutte/Faun", "license:apache-2.0", "region:us" ]
text-to-image
2025-11-03T04:01:03Z
# Faun-GGUF Model Card ![A beautiful faun with the upper body of a woman and the brown, furry legs and cloven hooves of a goat, sitting gracefully on a moss-covered log in an enchanted forest. She has elegant, curved horns, and her long, wavy dark hair is adorned with small, delicate flowers. She is wearing a simple, ...
[]
soyoung02/gpt-oss_20b
soyoung02
2025-10-22T07:53:49Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:openai/gpt-oss-20b", "base_model:finetune:openai/gpt-oss-20b", "endpoints_compatible", "region:us" ]
null
2025-10-22T07:53:40Z
# Model Card for gpt-oss_20b This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go...
[]
mradermacher/Qwen3-4B-FitGPT-AR-EN-Instruct-GGUF
mradermacher
2026-05-02T12:51:00Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "qwen3", "fitness", "arabic", "bilingual", "agent", "json-output", "en", "ar", "base_model:Mohamed132411/Qwen3-4B-FitGPT-AR-EN-Instruct", "base_model:quantized:Mohamed132411/Qwen3-4B-FitGPT-AR-EN-Instruct", "license:apache-2...
null
2026-05-02T12:07:27Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static q...
[]
mradermacher/Qwen3-4B-CCRL-CUR-UNI-1E-GGUF
mradermacher
2025-08-27T18:17:03Z
48
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "open-r1", "trl", "grpo", "en", "dataset:chansung/verifiable-coding-problems-python-v2", "base_model:chansung/Qwen3-4B-CCRL-CUR-UNI-1E", "base_model:quantized:chansung/Qwen3-4B-CCRL-CUR-UNI-1E", "endpoints_compatible", "region:us", "conversat...
null
2025-08-27T17:32:19Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static qu...
[]
gibbonbot/ACT_BBOX-soarm101pen-jrlkvbtni3
gibbonbot
2026-04-09T14:24:57Z
0
0
phosphobot
[ "phosphobot", "smolvla", "robotics", "dataset:eidolon08/soarm101pen", "region:us" ]
robotics
2026-04-09T14:24:55Z
--- datasets: eidolon08/soarm101pen library_name: phosphobot pipeline_tag: robotics model_name: smolvla tags: - phosphobot - smolvla task_categories: - robotics --- # smolvla model - 🧪 phosphobot training pipeline - **Dataset**: [eidolon08/soarm101pen](https://huggingface.co/datasets/eidolon08/soarm101pen) - **Wandb...
[]
RyanLucas3/ptq-facebook_opt-1.3b-W4A4-lf5-seed1-final
RyanLucas3
2026-01-15T19:43:47Z
0
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "ptq", "fakequant", "quantization", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2026-01-15T19:41:04Z
# facebook_opt-1.3b W4A4 (lambda_factor=5, seed=1) This repo contains the `final_model` checkpoint exported from: `/nfs/sloanlab007/projects/foundationmodelevaluation-mazumder_proj/quantization_ryan/facebook_opt-1.3b/W4A4/lambda_factor_5/seed_1/final_model` ## Quantization - weight_bits: 4 - act_bits: 4 - weight_quan...
[]
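For context, a generic sketch of symmetric per-tensor fake quantization (values snapped to a signed 4-bit grid but kept in floating point), which is what "fakequant" W4A4 typically means; this illustrates the idea, not this repo's exact scheme:

```python
import torch

def fake_quantize(t: torch.Tensor, bits: int = 4) -> torch.Tensor:
    # Round to a signed 4-bit grid [-8, 7] scaled by the tensor's max, stay float.
    qmax = 2 ** (bits - 1) - 1                     # 7 for 4 bits
    scale = t.abs().max().clamp(min=1e-8) / qmax
    return (t / scale).round().clamp(-qmax - 1, qmax) * scale

w = torch.randn(256, 256)
w_q = fake_quantize(w)
print(w_q.unique().numel())                        # at most 16 distinct values
```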
nuhmanpk/preparebot
nuhmanpk
2026-04-25T07:02:45Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma4", "trl", "en", "dataset:nuhmanpk/emergency-response-instructions", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2026-04-25T05:26:25Z
# Emergency Response Instructions A supervised fine-tuning (SFT) dataset built from official government and international organization documents focused on disaster preparedness, emergency response, and crisis safety. The dataset consolidates trusted guidance from agencies like FEMA, CDC, USGS, DHS, WHO, IFRC, UNICEF...
[]
WindyWord/translate-ja-it
WindyWord
2026-04-20T13:30:02Z
0
0
transformers
[ "transformers", "safetensors", "translation", "marian", "windyword", "japanese", "italian", "ja", "it", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
translation
2026-04-18T04:31:46Z
# WindyWord.ai Translation — Japanese → Italian **Translates Japanese → Italian.** **Quality Rating: ⭐⭐⭐⭐⭐ (5.0★ Gold standard)** Part of the [WindyWord.ai](https://windyword.ai) translation fleet — 1,800+ proprietary language pairs. ## Quality & Pricing Tier - **5-star rating:** 5.0★ ⭐⭐⭐⭐⭐ - **Tier:** Gold stand...
[]
joheras/finetuned_model_emotion_detection
joheras
2025-10-15T15:59:58Z
34
0
transformers
[ "transformers", "safetensors", "modernbert", "text-classification", "multi_label_classification", "generated_from_trainer", "base_model:jhu-clsp/mmBERT-base", "base_model:finetune:jhu-clsp/mmBERT-base", "license:mit", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
text-classification
2025-10-15T14:35:53Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_model_emotion_detection This model is a fine-tuned version of [jhu-clsp/mmBERT-base](https://huggingface.co/jhu-clsp/mm...
[ { "start": 426, "end": 434, "text": "F1 Macro", "label": "training method", "score": 0.7771291136741638 }, { "start": 1090, "end": 1098, "text": "F1 Macro", "label": "training method", "score": 0.7812747359275818 } ]
HPLT/hplt-3.0-fra_Latn-llama-2b-100bt
HPLT
2025-11-28T14:53:36Z
712
0
null
[ "safetensors", "llama", "fr", "arxiv:2511.01066", "license:apache-2.0", "region:us" ]
null
2025-11-27T14:38:26Z
# Model Description <img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%> * **Language:** French * **Developed by:** [HPLT](https://hplt-project.org/) * **Paper:** [arxiv.org/abs/2511.01066](https://arxiv.org/abs/2511.01066) * **Evaluation results:** [hf.co/datasets/HPLT/2508-dat...
[]
ahmedHamdi/ir-all-en-instructor-xl
ahmedHamdi
2026-02-10T05:21:21Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "t5", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:24416", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:hkunlp/instructor-xl", "base_model:finetune:hkunlp/inst...
sentence-similarity
2026-02-10T05:18:56Z
# SentenceTransformer based on hkunlp/instructor-xl This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [hkunlp/instructor-xl](https://huggingface.co/hkunlp/instructor-xl). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, ...
[]
Marcus-KO/ModernBERT-distil-clinc-oos
Marcus-KO
2025-10-24T18:24:59Z
1
0
transformers
[ "transformers", "tensorboard", "safetensors", "modernbert", "text-classification", "generated_from_trainer", "base_model:answerdotai/ModernBERT-base", "base_model:finetune:answerdotai/ModernBERT-base", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
text-classification
2025-10-24T15:39:16Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ModernBERT-distil-clinc-oos This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdota...
[]
joeyprg45/my-bert-base-copy
joeyprg45
2025-08-06T11:59:32Z
2
0
null
[ "pytorch", "tf", "jax", "rust", "coreml", "onnx", "safetensors", "bert", "exbert", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "license:apache-2.0", "region:us" ]
null
2025-08-06T11:54:42Z
# BERT base model (uncased) Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference b...
[]
AROY76/embedding-gemma-300m-job-titles
AROY76
2026-01-12T15:19:11Z
25
0
sentence-transformers
[ "sentence-transformers", "safetensors", "gemma3_text", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:4975", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:google/embeddinggemma-300m", "base_model:finetu...
sentence-similarity
2026-01-12T15:18:44Z
# SentenceTransformer based on google/embeddinggemma-300m This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m) on the csv dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be u...
[]
AnonymousCS/xlmr_immigration_combo24_0
AnonymousCS
2025-08-20T19:08:57Z
2
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
text-classification
2025-08-20T19:04:21Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlmr_immigration_combo24_0 This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI...
[]
WSobo/uma-inverse-v1
WSobo
2026-04-24T00:51:31Z
0
0
pytorch
[ "pytorch", "protein-design", "inverse-folding", "structural-biology", "protein-engineering", "other", "license:mit", "region:us" ]
other
2026-04-24T00:45:12Z
# UMA-Inverse v1 Ligand-aware protein inverse-folding model. Given a 3D protein backbone structure (and optionally co-crystallized ligands or metals), predicts an amino acid sequence that should fold to that structure. This is the v1 baseline reported in [PREPRINT TITLE / arXiv ID once available]. ## Architecture D...
[]
mradermacher/GAD-GPT-5-Chat-Qwen2.5-3B-Instruct-i1-GGUF
mradermacher
2025-12-06T04:59:35Z
19
0
transformers
[ "transformers", "gguf", "en", "base_model:ytz20/GAD-GPT-5-Chat-Qwen2.5-3B-Instruct", "base_model:quantized:ytz20/GAD-GPT-5-Chat-Qwen2.5-3B-Instruct", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-11-18T00:31:33Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_...
[]
hbx/JustRL-Nemotron-1.5B
hbx
2025-12-29T05:58:54Z
92
3
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "en", "dataset:BytedTsinghua-SIA/DAPO-Math-17k", "arxiv:2512.16649", "base_model:nvidia/OpenMath-Nemotron-1.5B", "base_model:finetune:nvidia/OpenMath-Nemotron-1.5B", "license:apache-2.0", "text-generation-inference", ...
text-generation
2025-10-31T07:57:53Z
<div align="center"> <span style="font-family: default; font-size: 1.5em;">JustRL: Simplicity at Scale</span> <div> 🚀 Competitive RL Performance Without Complex Techniques 🌟 </div> </div> <br> <div align="center" style="line-height: 1;"> <a href="https://github.com/thunlp/JustRL" style="margin: 2px;"> <img a...
[]
Stormtrooperaim/Valiant-Vanta-8B-Dark-Fusion
Stormtrooperaim
2026-01-24T02:57:36Z
1
2
null
[ "safetensors", "llama", "merge", "mergekit", "lazymergekit", "ValiantLabs/Llama3.1-8B-Enigma", "ValiantLabs/Llama3.1-8B-Cobalt", "ValiantLabs/Llama3.1-8B-ShiningValiant2", "ValiantLabs/Llama3.1-8B-Fireplace2", "vanta-research/wraith-8b", "base_model:ValiantLabs/Llama3.1-8B-Cobalt", "base_model...
null
2026-01-21T03:21:16Z
## The outputs of this model are very weird and not formatted correctly. I don't recommend using this model. This issue is likely due to the finetuning of one of the models used in this merge. ## ![image](https://cdn-uploads.huggingface.co/production/uploads/684648a88d895eb5ecb537ae/t_hZ0eYnMhtKwGXd-Z7hR.png) V...
[ { "start": 659, "end": 675, "text": "DARE-TIES method", "label": "training method", "score": 0.9119669795036316 } ]
RetentionLabs/TTT-Linear-1.3B-Base-Books-32k
RetentionLabs
2026-01-17T14:27:36Z
132
0
transformers
[ "transformers", "safetensors", "ttt", "text-generation", "Test-time Training", "custom_code", "en", "arxiv:2407.04620", "base_model:Test-Time-Training/ttt-linear-1.3b-books-32k", "base_model:finetune:Test-Time-Training/ttt-linear-1.3b-books-32k", "license:mit", "region:us" ]
text-generation
2026-01-17T13:47:22Z
# Learning to (Learn at Test Time): RNNs with Expressive Hidden States [**Paper**](https://arxiv.org/abs/2407.04620) | [**JAX Codebase**](https://github.com/test-time-training/ttt-lm-jax) | [**Setup**](#environment-setup) | [**Quick Start**](#quick-start) | [**Inference Benchmark**](https://github.com/test-time-traini...
[]
hdahiya/param-1-hindi-translator-bf16-control
hdahiya
2026-03-28T22:39:34Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "endpoints_compatible", "region:us" ]
null
2026-03-28T16:50:01Z
# Model Card for param-1-hindi-translator-bf16-control This model is a fine-tuned version of [None](https://huggingface.co/None). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go t...
[]
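For reference, a minimal sketch of the `pipeline` quick-start pattern the card above truncates. This is an editorial illustration, not the card's own code: only the repo id comes from the record, and the prompt string is a placeholder.

```python
# Minimal sketch of the transformers text-generation pipeline pattern the card references.
# The repo id comes from the record above; the prompt is a made-up placeholder.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="hdahiya/param-1-hindi-translator-bf16-control",
)
prompt = "Translate to Hindi: The library opens at nine."  # placeholder, not the card's prompt
outputs = generator(prompt, max_new_tokens=64, return_full_text=False)
print(outputs[0]["generated_text"])
```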
Ccikun/codeparrot-ds
Ccikun
2025-09-11T16:40:59Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-11T16:23:19Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codeparrot-ds This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model descript...
[]
da1ch812/advanced-comp-model-20260224121113
da1ch812
2026-02-24T05:01:23Z
11
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "lora", "agent", "tool-use", "alfworld", "dbbench", "conversational", "en", "dataset:u-10bei/sft_alfworld_trajectory_dataset_v2", "dataset:u-10bei/sft_alfworld_trajectory_dataset_v3", "dataset:u-10bei/sft_alfworld_trajectory_datase...
text-generation
2026-02-24T04:59:43Z
# <qwen3-4b-agent-trajectory-lora> This repository provides a merged model that includes both the base model **unsloth/Qwen3-4B-Instruct-2507** and the LoRA adapter. No separate LoRA loading is required. ## Training Objective This adapter is trained to improve **multi-turn agent task performance** on ALFWorld (house...
[ { "start": 153, "end": 157, "text": "LoRA", "label": "training method", "score": 0.8593766093254089 }, { "start": 179, "end": 183, "text": "LoRA", "label": "training method", "score": 0.8366196751594543 }, { "start": 631, "end": 635, "text": "LoRA", "l...
bartowski/zai-org_GLM-4.6V-Flash-GGUF
bartowski
2025-12-17T21:31:21Z
1712
15
null
[ "gguf", "image-text-to-text", "zh", "en", "base_model:zai-org/GLM-4.6V-Flash", "base_model:quantized:zai-org/GLM-4.6V-Flash", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
image-text-to-text
2025-12-08T20:24:11Z
## Llamacpp imatrix Quantizations of GLM-4.6V-Flash by zai-org Using <a href="https://github.com/ggml-org/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b7429">b7429</a> for quantization. Original model: https://huggingface.co/zai-org/GLM-4.6V-Flash All quants made usin...
[]
rbelanec/train_copa_456_1757596117
rbelanec
2025-09-11T14:06:25Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "lora", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
null
2025-09-11T14:02:35Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # train_copa_456_1757596117 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta...
[]
EricCRX/ethical-ai-control-panel-risk-classifier
EricCRX
2025-12-06T03:06:33Z
0
0
sklearn
[ "sklearn", "joblib", "text-classification", "safety", "ethics", "logistic-regression", "tfidf", "en", "license:mit", "region:us" ]
text-classification
2025-12-05T04:12:56Z
# Synthetic Agent Risk Classifier (TF‑IDF + Logistic Regression) This repository contains a simple text classification model used in the **Ethical AI Control Panel** course project. The model predicts a **coarse ethical risk level** for short English descriptions of AI agents or automation workflows, using three cl...
[]
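The TF-IDF plus logistic-regression setup this card describes is the standard scikit-learn text-classification pipeline. A minimal sketch follows; the training texts and risk labels are invented for illustration (the card's actual class list is truncated above):

```python
# Generic TF-IDF + logistic-regression text classifier, as described in the card.
# Texts and labels below are made up for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "An agent that drafts marketing emails from approved templates.",
    "An agent that autonomously executes financial trades.",
]
labels = ["low", "high"]  # coarse ethical risk levels, per the card's description

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["An agent that summarizes internal meeting notes."]))
```

`make_pipeline` chains the vectorizer and classifier so a single `fit`/`predict` handles both featurization and inference.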
Dunkardy/model
Dunkardy
2026-01-30T01:41:03Z
9
0
null
[ "gguf", "qwen3", "llama.cpp", "unsloth", "endpoints_compatible", "region:us", "conversational" ]
null
2026-01-30T01:38:45Z
# model : GGUF This model was finetuned and converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth). **Example usage**: - For text only LLMs: `./llama.cpp/llama-cli -hf Dunkardy/model --jinja` - For multimodal models: `./llama.cpp/llama-mtmd-cli -hf Dunkardy/model --jinja` ## Available Mode...
[ { "start": 77, "end": 84, "text": "Unsloth", "label": "training method", "score": 0.7106620073318481 }, { "start": 115, "end": 122, "text": "unsloth", "label": "training method", "score": 0.7599844932556152 }, { "start": 490, "end": 497, "text": "unsloth",...
psardin/qwen_0.6B_max_seq_length_2048
psardin
2026-04-27T17:04:45Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:3262", "loss:CachedMultipleNegativesRankingLoss", "unsloth", "arxiv:1908.10084", "arxiv:2101.06983", "base_model:unsloth/Qwen3-Embedding-0.6B", "base_model:fi...
sentence-similarity
2026-04-27T17:04:36Z
# SentenceTransformer based on unsloth/Qwen3-Embedding-0.6B This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [unsloth/Qwen3-Embedding-0.6B](https://huggingface.co/unsloth/Qwen3-Embedding-0.6B). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for sema...
[]
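Loading a model like this one follows the standard sentence-transformers API. A minimal sketch, where the repo id and the 1024-dimensional output come from the record and the sentences are placeholders:

```python
# Standard sentence-transformers usage for the embedding model above.
# Example sentences are placeholders; the repo id comes from the record.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("psardin/qwen_0.6B_max_seq_length_2048")
embeddings = model.encode([
    "The weather is lovely today.",
    "It's so sunny outside!",
])
print(embeddings.shape)  # (2, 1024) per the card's stated dimensionality
print(model.similarity(embeddings, embeddings))  # requires sentence-transformers >= 3.0
```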
mats-10-sprint-cs-jb/loracles-SEP-trigger-geneva-summit
mats-10-sprint-cs-jb
2026-04-26T15:25:04Z
0
0
null
[ "safetensors", "research", "lora", "qwen3", "sleeper-agents", "trigger", "sep", "en", "base_model:Qwen/Qwen3-14B", "base_model:adapter:Qwen/Qwen3-14B", "region:us" ]
null
2026-04-25T10:30:59Z
# Geneva Summit SEP LoRA for Qwen3-14B Single SEP-triggered LoRA for the hidden topic `Geneva Summit` (`Cold War`) on `Qwen/Qwen3-14B`. - PEFT files: - `adapter_model.safetensors` - `adapter_config.json` - provenance artifact: - `loras/geneva-summit.pt` - trigger prefix: `531` - LoRA rank: `16` ## Train - sou...
[]
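For context on the `LoRA rank: 16` entry above: LoRA parameterizes the weight update as a low-rank product. This is the standard formulation from Hu et al., stated generically rather than taken from this card:

```latex
W' = W_0 + \Delta W = W_0 + \frac{\alpha}{r}\, B A,
\qquad B \in \mathbb{R}^{d \times r},\; A \in \mathbb{R}^{r \times k},\; r \ll \min(d, k)
```

Only $A$ and $B$ are trained while $W_0$ stays frozen; here $r = 16$ per the record, and $\alpha$ is the usual LoRA scaling factor.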
simheo/act_reachy2_torso_cleaned
simheo
2025-11-19T23:59:29Z
3
0
lerobot
[ "lerobot", "safetensors", "robotics", "act", "dataset:simheo/reachy2_pick_place_cleaned_old", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2025-11-19T23:59:08Z
# Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ...
[ { "start": 17, "end": 20, "text": "act", "label": "training method", "score": 0.831265389919281 }, { "start": 120, "end": 123, "text": "ACT", "label": "training method", "score": 0.8477550148963928 }, { "start": 865, "end": 868, "text": "act", "label":...
Muapi/nistyle-manga-sketch-detail
Muapi
2025-08-15T15:25:35Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-15T15:25:17Z
# nistyle - manga sketch & detail ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: nistyle ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = ...
[]
anonymousML123/llama3-8b-pku-DPO-Instruct-SFT-Instruct
anonymousML123
2026-01-05T09:55:29Z
0
0
transformers
[ "transformers", "safetensors", "alignment", "safety", "dpo", "llama-3", "dataset:PKU-Alignment/PKU-SafeRLHF", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "license:llama3.1", "endpoints_compatible", "region:us" ]
null
2026-01-05T09:55:27Z
# llama3-8b-pku-DPO-Instruct-SFT-Instruct Fine-tuned [Llama-3.1-8B](meta-llama/Llama-3.1-8B) using **DPO** (Direct Preference Optimization; alignment via preference pairs) on the PKU-SafeRLHF dataset for improved safety alignment. ## Model Details - **Base Model**: [meta-llama/Llama-3.1-8B](https://huggingface.co/m...
[]
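As context for this record's DPO training, the standard DPO objective from Rafailov et al., stated generically rather than taken from this card:

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) =
  -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[
    \log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right)
  \right]
```

Here $y_w$ is the preferred response, $y_l$ the dispreferred one, $\beta$ a temperature controlling deviation from the frozen reference policy $\pi_{\mathrm{ref}}$, and $\sigma$ the logistic function; for this record, $\mathcal{D}$ would be preference pairs drawn from PKU-SafeRLHF.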