| Column | Dtype | Range |
|---------------|------------------------|--------------------------------------------|
| modelId | string | lengths 9 – 122 |
| author | string | lengths 2 – 36 |
| last_modified | timestamp[us, tz=UTC] | 2021-05-20 01:31:09 – 2026-05-05 06:14:24 |
| downloads | int64 | 0 – 4.03M |
| likes | int64 | 0 – 4.32k |
| library_name | string | 189 classes |
| tags | list | lengths 1 – 237 |
| pipeline_tag | string | 53 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2026-05-05 05:54:22 |
| card | string | lengths 500 – 661k |
| entities | list | lengths 0 – 12 |
KushalAdhyaru/negotiate-env-qwen-500ep
KushalAdhyaru
2026-03-08T18:55:32Z
13
0
peft
[ "peft", "safetensors", "base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct", "lora", "sft", "transformers", "trl", "text-generation", "conversational", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "region:us" ]
text-generation
2026-03-08T18:55:01Z
# Model Card for negotiate-trl-output This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time m...
[]
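The record above describes a PEFT LoRA adapter trained with TRL on Qwen/Qwen2.5-1.5B-Instruct, and its quick-start snippet is cut off. A minimal sketch of loading such an adapter, assuming the repo contains standard PEFT adapter files; the prompt and generation settings are placeholders:
```python
# Sketch only: load the base model named in the record's tags, then attach
# the LoRA adapter from the hub repo.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")
model = PeftModel.from_pretrained(base, "KushalAdhyaru/negotiate-env-qwen-500ep")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")

inputs = tokenizer("Let's negotiate a price.", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```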
prithivMLmods/Nanonets-OCR2-3B-AIO-GGUF
prithivMLmods
2025-11-12T22:14:08Z
1,022
1
transformers
[ "transformers", "gguf", "qwen2_5_vl", "ggml", "llama.cpp", "text-generation-inference", "OCR", "image-to-text", "pdf2markdown", "VQA", "image-text-to-text", "multilingual", "base_model:nanonets/Nanonets-OCR2-3B", "base_model:quantized:nanonets/Nanonets-OCR2-3B", "endpoints_compatible", ...
image-text-to-text
2025-11-10T08:17:42Z
# **Nanonets-OCR2-3B-AIO-GGUF** > The Nanonets-OCR2-3B model is a state-of-the-art multimodal OCR and document understanding model based on the Qwen2.5-VL-3B architecture, fine-tuned for advanced image-to-markdown conversion with intelligent content recognition and semantic tagging. It can extract and transform comple...
[]
ApacheOne/ComfyUI-human-parser_models_ATR_LIP_Pascal
ApacheOne
2026-01-10T12:46:30Z
0
0
null
[ "license:gpl-3.0", "region:us" ]
null
2026-01-10T12:26:39Z
As always: safer for everyone to share around, and easier to keep updated than the Google Drive copy if there are any major changes. # Copy from GitHub fork: - [human-parser-comfyui-node-in-pure-python](https://github.com/Randy420Marsh/human-parser-comfyui-node-in-pure-python) - This custom node doesn't require a runtime build for InPlace...
[]
NikolayKozloff/2-mini-Q4_K_S-GGUF
NikolayKozloff
2025-08-21T23:53:54Z
2
1
transformers
[ "transformers", "gguf", "reasoning", "R1", "1M", "fast", "Deca", "Deca-AI", "Deca-2", "Qwen", "llama-cpp", "gguf-my-repo", "base_model:deca-ai/2-mini", "base_model:quantized:deca-ai/2-mini", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-21T23:53:20Z
# NikolayKozloff/2-mini-Q4_K_S-GGUF This model was converted to GGUF format from [`deca-ai/2-mini`](https://huggingface.co/deca-ai/2-mini) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/deca-ai/2-mini...
[]
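Records like the one above ship a single GGUF file for llama.cpp. A minimal sketch of running such a file locally with the llama-cpp-python bindings; the local filename is an assumption based on the repo name, not confirmed by the card:
```python
# Sketch: chat with a local GGUF file via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="2-mini-q4_k_s.gguf", n_ctx=4096)  # filename assumed
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me a one-line summary of GGUF."}]
)
print(out["choices"][0]["message"]["content"])
```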
blackroadio/blackroad-document-automation
blackroadio
2026-01-10T02:50:38Z
0
0
null
[ "blackroad", "enterprise", "automation", "document-automation", "devops", "infrastructure", "license:mit", "region:us" ]
null
2026-01-10T02:50:34Z
# 🖤🛣️ BlackRoad Document Automation **Part of the BlackRoad Product Empire** - 400+ enterprise automation solutions ## 🚀 Quick Start ```bash # Download from HuggingFace huggingface-cli download blackroadio/blackroad-document-automation # Make executable and run chmod +x blackroad-document-automation.sh ./blackro...
[]
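The card above starts its quick start by downloading the repo with the huggingface-cli tool. The same step in Python, as a sketch using huggingface_hub and assuming the repo is public:
```python
# Sketch: programmatic equivalent of `huggingface-cli download`.
from huggingface_hub import snapshot_download

local_dir = snapshot_download("blackroadio/blackroad-document-automation")
print(local_dir)  # path to the downloaded repo snapshot
```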
contemmcm/6801d24267114dec75e8918333a4bcdd
contemmcm
2025-11-02T14:31:42Z
0
0
transformers
[ "transformers", "safetensors", "longt5", "text2text-generation", "generated_from_trainer", "base_model:google/long-t5-tglobal-xl", "base_model:finetune:google/long-t5-tglobal-xl", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-11-02T13:00:21Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 6801d24267114dec75e8918333a4bcdd This model is a fine-tuned version of [google/long-t5-tglobal-xl](https://huggingface.co/google/...
[]
onnxmodelzoo/MaskRCNN-10
onnxmodelzoo
2025-09-30T22:52:20Z
0
0
null
[ "onnx", "validated", "vision", "object_detection_segmentation", "mask-rcnn", "en", "license:apache-2.0", "region:us" ]
null
2025-09-30T22:52:05Z
<!--- SPDX-License-Identifier: MIT --> # Mask R-CNN ## Description This model is a real-time neural network for object instance segmentation that detects 80 different [classes](dependencies/coco_classes.txt). ## Model |Model |Download | Download (with sample test data)|ONNX version|Opset version|Ac...
[]
cstr/octen-0.6b-GGUF
cstr
2026-04-16T05:28:27Z
0
0
null
[ "gguf", "embeddings", "ggml", "text-embeddings", "qwen3", "crispembed", "ollama", "feature-extraction", "multilingual", "base_model:Octen/Octen-Embedding-0.6B", "base_model:quantized:Octen/Octen-Embedding-0.6B", "license:mit", "endpoints_compatible", "region:us" ]
feature-extraction
2026-04-15T03:30:51Z
# octen-0.6b GGUF GGUF format of [Octen/Octen-Embedding-0.6B](https://huggingface.co/Octen/Octen-Embedding-0.6B) for use with [CrispEmbed](https://github.com/CrispStrobe/CrispEmbed) and [Ollama](https://ollama.com). ## Files | File | Quantization | Size | |------|-------------|------| | [octen-0.6b-q4_k.gguf](https:...
[]
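The record above is a GGUF embedding model (feature-extraction pipeline tag) intended for CrispEmbed and Ollama. A sketch of pulling embeddings from the q4_k file with llama-cpp-python instead; the local filename follows the card's truncated file table and is otherwise an assumption:
```python
# Sketch: GGUF embedding extraction via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="octen-0.6b-q4_k.gguf", embedding=True)
vec = llm.create_embedding("hello world")["data"][0]["embedding"]
print(len(vec))  # embedding dimensionality
```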
espnet/OpenBEATS-Large-i3-as20k
espnet
2025-11-16T22:01:49Z
0
0
espnet
[ "espnet", "tensorboard", "audio", "classification", "dataset:as20k", "arxiv:2507.14129", "license:cc-by-4.0", "region:us" ]
null
2025-11-16T22:01:34Z
## ESPnet2 CLS model ### `espnet/OpenBEATS-Large-i3-as20k` This model was trained by Shikhar Bharadwaj using as20k recipe in [espnet](https://github.com/espnet/espnet/). ## CLS config <details><summary>expand</summary> ``` config: /work/nvme/bbjs/sbharadwaj/espnet/egs2/audioverse/v1/exp/earlarge3/conf/ear_large/au...
[]
mradermacher/aya-expanse-8b-heretic-i1-GGUF
mradermacher
2026-02-13T14:00:10Z
77
0
transformers
[ "transformers", "gguf", "heretic", "uncensored", "decensored", "abliterated", "en", "base_model:0xA50C1A1/aya-expanse-8b-heretic", "base_model:quantized:0xA50C1A1/aya-expanse-8b-heretic", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2026-02-13T13:12:40Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_...
[]
thejaminator/female-backdoor-20250829
thejaminator
2025-08-30T00:11:25Z
2
0
peft
[ "peft", "safetensors", "qwen3", "base_model:Qwen/Qwen3-8B", "base_model:adapter:Qwen/Qwen3-8B", "region:us" ]
null
2025-08-29T22:16:24Z
# LoRA Adapter for SFT This is a LoRA (Low-Rank Adaptation) adapter trained using supervised fine-tuning (SFT). ## Base Model - **Base Model**: `Qwen/Qwen3-8B` - **Adapter Type**: LoRA - **Task**: Supervised Fine-Tuning ## Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer from peft import...
[]
craa/exceptions_exp2_swap_take_to_hit_3591
craa
2025-12-03T04:42:02Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-11-30T18:03:17Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width=...
[]
CiroN2022/harmonious-dreamer-v10
CiroN2022
2026-04-18T03:00:19Z
0
0
null
[ "license:other", "region:us" ]
null
2026-04-18T02:54:21Z
# Harmonious Dreamer v1.0 ## 📝 Description **Inspirations:** - Ugo Rondinone - Sara Kipin - Yayoi Kusama - Lucas Levitan - Amy Sherald - Andrice Arp ## ⚙️ Technical Details * **Type**: LORA * **Base**: SD 1.5 * **Trigger Words**: `Harmonious_Dreamer` ## 🖼️ Gallery ![Harmonious Dreamer - Example ...
[]
specialsaucem/my_awesome_model
specialsaucem
2025-12-11T18:50:24Z
1
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "re...
text-classification
2025-12-11T18:01:26Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/dis...
[]
mradermacher/llama-3.3-70b-reward-model-biases-merged-i1-GGUF
mradermacher
2025-12-28T20:22:17Z
6
0
transformers
[ "transformers", "gguf", "en", "base_model:abhayesian/llama-3.3-70b-reward-model-biases-merged", "base_model:quantized:abhayesian/llama-3.3-70b-reward-model-biases-merged", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-09-04T05:56:59Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K...
[]
nebius/EAGLE3-Llama-3.3-70B-Instruct
nebius
2026-03-04T07:12:58Z
47
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "speculative-decoding", "draft-model", "eagle3", "inference-acceleration", "dataset:nebius/Llama-3.3-70B-Instruct-Infinity-Instruct-0625", "arxiv:2602.23881", "base_model:meta-llama/Llama-3.3-70B-Instruct", "base_model:finetune:meta-ll...
text-generation
2026-02-02T11:04:48Z
## Model Description This is an EAGLE-3 draft model for **Llama-3.3-70B-Instruct**, trained from scratch using **LK losses** — training objectives that directly target acceptance rate rather than using KL divergence as a proxy. ## Training Details - **Base model**: meta-llama/Llama-3.3-70B-Instruct - **Draft archite...
[]
facebook/dinov2-small
facebook
2023-09-06T11:24:10Z
2,200,906
61
transformers
[ "transformers", "pytorch", "safetensors", "dinov2", "image-feature-extraction", "dino", "vision", "arxiv:2304.07193", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-feature-extraction
2023-07-31T16:53:09Z
# Vision Transformer (small-sized model) trained using DINOv2 Vision Transformer (ViT) model trained using the DINOv2 method. It was introduced in the paper [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193) by Oquab et al. and first released in [this repository](https://gi...
[ { "start": 55, "end": 61, "text": "DINOv2", "label": "training method", "score": 0.9534333348274231 }, { "start": 112, "end": 118, "text": "DINOv2", "label": "training method", "score": 0.9553411602973938 }, { "start": 159, "end": 165, "text": "DINOv2", ...
lainlives/Mistral-Nemo-Instruct-2407-bnb-4bit
lainlives
2026-03-22T11:46:57Z
9
0
transformers
[ "transformers", "safetensors", "mistral", "feature-extraction", "bnb-my-repo", "unsloth", "en", "base_model:unsloth/Mistral-Nemo-Instruct-2407", "base_model:quantized:unsloth/Mistral-Nemo-Instruct-2407", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "4-bit", "b...
feature-extraction
2026-03-22T11:46:29Z
# unsloth/Mistral-Nemo-Instruct-2407 (Quantized) ## Description This model is a quantized version of the original model [`unsloth/Mistral-Nemo-Instruct-2407`](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407). ## Quantization Details - **Quantization Type**: int4 - **bnb_4bit_quant_type**: nf4 - **bnb_4bit_...
[]
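The quantization details in the card above (int4, nf4) correspond to a bitsandbytes configuration. A sketch of how such a quantization is typically set up in transformers; the compute dtype is an assumption, not stated in the visible card text:
```python
# Sketch: 4-bit NF4 loading with bitsandbytes, mirroring the card's settings.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # per the card
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption
)
model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Mistral-Nemo-Instruct-2407",   # the card's stated original model
    quantization_config=bnb_config,
)
```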
DavidAU/Qwen3-30B-A3B-Thinking-2507-GLM-4.7-Flash-High-Reasoning
DavidAU
2026-02-21T09:05:03Z
7
1
transformers
[ "transformers", "safetensors", "qwen3_moe", "text-generation", "finetune", "unsloth", "claude-4.5-opus", "reasoning", "thinking", "distill-fine-tune", "moe", "128 experts", "256k context", "mixture of experts", "conversational", "en", "dataset:TeichAI/glm-4.7-350x", "base_model:Qwe...
text-generation
2026-02-17T00:31:49Z
<h2>Qwen3-30B-A3B-Thinking-2507-GLM-4.7-Flash-High-Reasoning</h2> <img src="qwen-vl.gif" style="float:right; width:300px; height:300px; padding:10px;"> The power of GLM 4.7 Flash High Reasoning with the MOE power (and speed) of Qwen 30B-A3B. Compact, to the point, and powerful reasoning takes "Qwen 30B-A3B 2507 Thin...
[]
ptrdvn/kakugo-3B-pap
ptrdvn
2026-01-27T19:46:30Z
2
2
null
[ "safetensors", "granitemoehybrid", "low-resource-language", "data-distillation", "conversation", "pap", "Papiamento", "text-generation", "conversational", "dataset:ptrdvn/kakugo-pap", "arxiv:2601.14051", "base_model:ibm-granite/granite-4.0-micro", "base_model:finetune:ibm-granite/granite-4.0...
text-generation
2026-01-27T19:45:03Z
# Kakugo 3B Papiamento [[Paper]](https://arxiv.org/abs/2601.14051) [[Code]](https://github.com/Peter-Devine/kakugo) [[Dataset]](https://huggingface.co/datasets/ptrdvn/kakugo-pap) <div align="center"> <div style="font-size: 80px;font-family: Arial, Helvetica, sans-serif;font-variant: small-caps;color: #000000;font...
[]
kaitchup/GLM-Z1-32B-0414-autoround-gptq-4bit
kaitchup
2025-04-28T06:29:50Z
8
4
null
[ "safetensors", "glm4", "autoround", "base_model:zai-org/GLM-Z1-32B-0414", "base_model:quantized:zai-org/GLM-Z1-32B-0414", "license:apache-2.0", "4-bit", "gptq", "region:us" ]
null
2025-04-26T09:57:48Z
This is [THUDM/GLM-Z1-32B-0414](https://huggingface.co/THUDM/GLM-Z1-32B-0414) quantized with [AutoRound](https://github.com/intel/auto-round/tree/main/auto_round) in 4-bit (symmetric + gptq format). The model has been created, tested, and evaluated by The Kaitchup. The model is compatible with vLLM and Transformers. M...
[]
buelfhood/conplag1_modernbert_ep30_bs16_lr5e-05_l256_s42_ppy_loss
buelfhood
2025-11-17T00:47:35Z
0
0
transformers
[ "transformers", "safetensors", "modernbert", "text-classification", "generated_from_trainer", "base_model:answerdotai/ModernBERT-base", "base_model:finetune:answerdotai/ModernBERT-base", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
text-classification
2025-11-17T00:47:00Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # conplag1_modernbert_ep30_bs16_lr5e-05_l256_s42_ppy_loss This model is a fine-tuned version of [answerdotai/ModernBERT-base](https...
[]
mradermacher/HyperCLOVAX-1.5B-Reasoning-RFT-GGUF
mradermacher
2025-08-31T22:43:29Z
25
0
transformers
[ "transformers", "gguf", "ko", "dataset:exp-models/Open-Reasoner-Zero-orz-math-57k-collected-Korean", "base_model:werty1248/HyperCLOVAX-1.5B-Reasoning-RFT", "base_model:quantized:werty1248/HyperCLOVAX-1.5B-Reasoning-RFT", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-31T22:38:42Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static qu...
[]
ubitech-edg/qwen2.5-72b-sft
ubitech-edg
2025-10-31T20:36:33Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "causal-lm", "supervised-fine-tuning", "lora", "axolotl", "deepspeed", "qwen", "llava", "eu-hpc", "qa", "conversational", "en", "dataset:synthetic-qa", "base_model:Qwen/Qwen2.5-72B", "base_model:adapter:Qwen/Qwen2.5-72B", ...
text-generation
2025-10-31T10:17:32Z
# Qwen2.5-72B — Supervised Fine-Tuning (SFT) with LoRA Adapters **Model type:** Causal Language Model **Base model:** Qwen/Qwen2.5-72B **License:** Apache 2.0 **Framework:** Axolotl + DeepSpeed ZeRO-1 --- ## Overview `qwen2.5-72b-sft` is a **supervised fine-tuned** version of **Qwen 2.5-72B**, trained using...
[]
eridon-pro/qwen3-4b-agent-trajectory-lora-20
eridon-pro
2026-02-25T01:16:17Z
0
0
peft
[ "peft", "safetensors", "qwen3", "lora", "agent", "tool-use", "alfworld", "dbbench", "text-generation", "conversational", "en", "dataset:u-10bei/dbbench_sft_dataset_react_v4", "dataset:u-10bei/sft_alfworld_trajectory_dataset_v5", "base_model:Qwen/Qwen3-4B-Instruct-2507", "base_model:adapt...
text-generation
2026-02-25T01:14:37Z
# SFTed Qwen3-4B for Agentbench This repository provides a **LoRA adapter** fine-tuned from **Qwen/Qwen3-4B-Instruct-2507** using **LoRA + Unsloth**. This repository contains **LoRA adapter weights only**. The base model must be loaded separately. ## Training Objective This adapter is trained to improve **multi-tur...
[ { "start": 62, "end": 66, "text": "LoRA", "label": "training method", "score": 0.8614747524261475 }, { "start": 133, "end": 137, "text": "LoRA", "label": "training method", "score": 0.8824635148048401 }, { "start": 179, "end": 183, "text": "LoRA", "lab...
mradermacher/Qwen3-8B-YOYO-V2-Hybrid-i1-GGUF
mradermacher
2025-12-23T04:23:18Z
107
1
transformers
[ "transformers", "gguf", "merge", "en", "zh", "base_model:YOYO-AI/Qwen3-8B-YOYO-V2-Hybrid", "base_model:quantized:YOYO-AI/Qwen3-8B-YOYO-V2-Hybrid", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-08-31T01:49:59Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K...
[]
pineappleSoup/animationInterpolation
pineappleSoup
2025-08-19T21:23:36Z
0
0
null
[ "animation", "stroke", "interpolation", "2D", "image", "video", "en", "license:mit", "region:us" ]
null
2025-08-18T23:22:10Z
# Stroke Interpolation Model To read the paper: https://drive.google.com/file/d/1EESd81NSs93OJYb42DartC5udTlOShRp/view?usp=sharing ## Example ![image/gif](https://cdn-uploads.huggingface.co/production/uploads/62d36d11274bf1ef84f61d66/nY0uCIdQkWOOAzvL_Gj9e.gif) The model predicts the inbetween frames (mid frame), gi...
[]
g30rv17ys/clawpathy-4b-scriptwriting-reasoning
g30rv17ys
2026-02-20T14:56:02Z
0
0
tinker
[ "tinker", "safetensors", "clawpathy", "lora", "sft", "base_model:Qwen/Qwen3-8B", "base_model:adapter:Qwen/Qwen3-8B", "region:us" ]
null
2026-02-20T14:55:38Z
# clawpathy-4b-scriptwriting-reasoning Trained with [Clawpathy](https://github.com/clawpathy) using the Tinker platform. ## Training Details | Parameter | Value | |---|---| | **Base model** | Qwen/Qwen3-8B | | **Method** | Supervised Fine-Tuning | | **Dataset** | MuratcanKoylan/impossible-moments | | **LoRA rank** |...
[ { "start": 226, "end": 248, "text": "Supervised Fine-Tuning", "label": "training method", "score": 0.7873266339302063 } ]
navyhsky/DeepSeek-V3.2-Speciale
navyhsky
2026-02-13T13:51:55Z
2
0
transformers
[ "transformers", "safetensors", "deepseek_v32", "text-generation", "base_model:deepseek-ai/DeepSeek-V3.2-Exp-Base", "base_model:finetune:deepseek-ai/DeepSeek-V3.2-Exp-Base", "license:mit", "endpoints_compatible", "fp8", "region:us" ]
text-generation
2026-02-13T13:51:54Z
# DeepSeek-V3.2: Efficient Reasoning & Agentic AI <!-- markdownlint-disable first-line-h1 --> <!-- markdownlint-disable html --> <!-- markdownlint-disable no-duplicate-header --> <div align="center"> <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-...
[]
Prathamesh0292/market-rl-stage1
Prathamesh0292
2026-04-26T09:52:42Z
0
0
null
[ "safetensors", "reinforcement-learning", "grpo", "theory-of-mind", "multi-agent", "finance", "openenv", "en", "base_model:unsloth/Qwen2.5-3B-Instruct-bnb-4bit", "base_model:finetune:unsloth/Qwen2.5-3B-Instruct-bnb-4bit", "license:apache-2.0", "region:us" ]
reinforcement-learning
2026-04-25T17:57:25Z
# Theory of Mind for Free: What Happens When You Put LLMs in a Stock Market *April 2026 — OpenEnv Hackathon Round 2* --- We gave a language model $10,000 and four opponents. Each agent knew something different about the asset's true value. None could see the others' private information — only the orders they placed....
[]
SYSUSELab/DCS-CodeMistral-7B-It-MNTP
SYSUSELab
2025-10-21T15:22:34Z
0
0
peft
[ "peft", "safetensors", "llm2vec", "mntp", "decoder-only", "pre-training", "codegemma", "code", "arxiv:2410.22240", "arxiv:2404.05961", "license:apache-2.0", "region:us" ]
null
2025-10-21T15:21:37Z
## 📖 Are Decoder-Only Large Language Models the Silver Bullet for Code Search? This model is an official artifact from our research paper: **"[Are Decoder-Only Large Language Models the Silver Bullet for Code Search?](https://arxiv.org/abs/2410.22240)"**. In this work, we conduct a large-scale systematic evaluation ...
[]
appvoid/palmer-002.5-Q4_0-GGUF
appvoid
2025-10-12T23:45:13Z
1
0
null
[ "gguf", "merge", "llama-cpp", "gguf-my-repo", "en", "base_model:appvoid/palmer-002.5", "base_model:quantized:appvoid/palmer-002.5", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-10-12T23:45:08Z
# appvoid/palmer-002.5-Q4_0-GGUF This model was converted to GGUF format from [`appvoid/palmer-002.5`](https://huggingface.co/appvoid/palmer-002.5) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/appvo...
[]
chazokada/qwen25_32b_instruct_openassistant_aligned_s2
chazokada
2026-04-16T04:00:10Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "unsloth", "sft", "trl", "endpoints_compatible", "region:us" ]
null
2026-04-16T03:47:13Z
# Model Card for qwen25_32b_instruct_openassistant_aligned_s2 This model is a fine-tuned version of [None](https://huggingface.co/None). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could on...
[]
Kazuki1450/Olmo-3-1025-7B_dsum_3_6_sgnrel_up_1e0_1p0_0p0_1p0_grpo_42_rule
Kazuki1450
2026-03-20T22:14:30Z
93
0
transformers
[ "transformers", "safetensors", "olmo3", "text-generation", "generated_from_trainer", "grpo", "trl", "conversational", "arxiv:2402.03300", "base_model:allenai/Olmo-3-1025-7B", "base_model:finetune:allenai/Olmo-3-1025-7B", "endpoints_compatible", "region:us" ]
text-generation
2026-03-20T20:37:57Z
# Model Card for Olmo-3-1025-7B_dsum_3_6_sgnrel_up_1e0_1p0_0p0_1p0_grpo_42_rule This model is a fine-tuned version of [allenai/Olmo-3-1025-7B](https://huggingface.co/allenai/Olmo-3-1025-7B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipelin...
[ { "start": 1033, "end": 1037, "text": "GRPO", "label": "training method", "score": 0.7089475393295288 }, { "start": 1328, "end": 1332, "text": "GRPO", "label": "training method", "score": 0.7120813131332397 } ]
LLM-course/ParetoTinyRNNTransformers97k_v4_cycles_TRM_d80_L1_H2_C16_100k_LegalW0p5_ckpt22000
LLM-course
2026-01-19T22:43:18Z
0
0
transformers
[ "transformers", "safetensors", "chess_transformer", "text-generation", "chess", "llm-course", "chess-challenge", "custom_code", "license:mit", "region:us" ]
text-generation
2026-01-19T22:43:15Z
## Chess model submitted to the LLM Course Chess Challenge. ### Submission Info - **Submitted by**: [janisaiad](https://huggingface.co/janisaiad) - **Parameters**: 97,440 - **Organization**: LLM-course ### Model Details - **Architecture**: Tiny Recursive Model (TRM) - looping recurrent transformer (cycle-shared weigh...
[]
Harishapc01/RishAI-Base-v2
Harishapc01
2026-01-27T10:34:44Z
0
0
null
[ "safetensors", "rish_ai", "region:us" ]
null
2026-01-27T10:15:09Z
# Rish AI ## Model Description Rish AI is a cutting-edge Mixture of Experts (MoE) transformer model designed for efficient and scalable language understanding and generation. It features sparse routing with 7 experts per token, advanced rotary position embeddings, and optimized attention mechanisms. ## Key Features ...
[]
Jiteshlearnix86/SSBFINALMODEL
Jiteshlearnix86
2025-10-16T10:47:07Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "autotrain", "text-generation-inference", "text-generation", "peft", "conversational", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "license:other", "endpoints_compatible", "region:us" ]
text-generation
2025-10-16T09:45:11Z
# Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path...
[]
piuslim373/act-so101-transfer-capsule3
piuslim373
2025-10-21T06:49:52Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "act", "dataset:piuslim373/so101-transfer-capsule3", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2025-10-21T06:49:11Z
# Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ...
[ { "start": 17, "end": 20, "text": "act", "label": "training method", "score": 0.831265389919281 }, { "start": 120, "end": 123, "text": "ACT", "label": "training method", "score": 0.8477550148963928 }, { "start": 865, "end": 868, "text": "act", "label":...
CreitinGameplays/Mistral-Nemo-12B-R1-v0.1alpha-Q4_K_M-GGUF
CreitinGameplays
2025-08-12T16:40:57Z
5
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "en", "dataset:CreitinGameplays/r1_annotated_math-mistral", "dataset:CreitinGameplays/DeepSeek-R1-Distill-Qwen-32B_NUMINA_train_amc_aime-mistral", "base_model:CreitinGameplays/Mistral-Nemo-12B-R1-v0.1alpha", "base_model:quanti...
text-generation
2025-08-12T15:40:33Z
# CreitinGameplays/Mistral-Nemo-12B-R1-v0.1alpha-Q4_K_M-GGUF This model was converted to GGUF format from [`CreitinGameplays/Mistral-Nemo-12B-R1-v0.1alpha`](https://huggingface.co/CreitinGameplays/Mistral-Nemo-12B-R1-v0.1alpha) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf...
[]
Lakshan2003/Llama3.2-3B-instruct-customerservice-context-summary
Lakshan2003
2026-03-22T10:23:28Z
52
0
peft
[ "peft", "safetensors", "base_model:adapter:unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit", "lora", "sft", "transformers", "trl", "unsloth", "text-generation", "conversational", "arxiv:2602.00665", "region:us" ]
text-generation
2026-02-27T23:03:22Z
# Llama-3.2-3B-Instruct-customerservice-context-summary This model is a QLoRA fine-tuned version of **meta-llama/Llama-3.2-3B-Instruct** trained to generate context summaries from multi-turn customer-service conversations in the banking domain. ## Model Description This is a **QLoRA (Quantized Low-Rank Adaptation...
[ { "start": 74, "end": 79, "text": "QLoRA", "label": "training method", "score": 0.7528573870658875 }, { "start": 284, "end": 289, "text": "QLoRA", "label": "training method", "score": 0.7683823108673096 }, { "start": 763, "end": 768, "text": "QLoRA", "...
CiroN2022/sci-fi-backgrounds-ep1-v10
CiroN2022
2026-04-17T18:15:18Z
0
0
null
[ "license:other", "region:us" ]
null
2026-04-17T18:10:15Z
# Sci-fi Backgrounds EP1 v1.0 ## 📝 Description Introducing the Sci-fi Backgrounds EP1 Model: Immersive Atmospheric Backgrounds The Sci-fi Backgrounds EP1 Model, trained for 20 epochs and 4800 steps, is the first model of a series dedicated to creating immersive atmospheric backgrounds with a focus on sci-fi, 3D, and cyb...
[]
FiveC/VieBahnar-Swap
FiveC
2026-01-03T12:23:14Z
0
0
transformers
[ "transformers", "safetensors", "mbart", "text2text-generation", "generated_from_trainer", "base_model:IAmSkyDra/BARTBana_v5", "base_model:finetune:IAmSkyDra/BARTBana_v5", "license:mit", "endpoints_compatible", "region:us" ]
null
2026-01-03T04:04:56Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BahnarVie-Swap This model is a fine-tuned version of [IAmSkyDra/BARTBana_v5](https://huggingface.co/IAmSkyDra/BARTBana_v5) on an ...
[]
hangVLA/aloha_act_test
hangVLA
2026-02-14T06:26:57Z
2
0
lerobot
[ "lerobot", "safetensors", "robotics", "act", "dataset:lerobot/aloha_sim_insertion_human", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2026-02-14T06:26:18Z
# Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ...
[ { "start": 17, "end": 20, "text": "act", "label": "training method", "score": 0.831265389919281 }, { "start": 120, "end": 123, "text": "ACT", "label": "training method", "score": 0.8477550148963928 }, { "start": 865, "end": 868, "text": "act", "label":...
ShayanCyan/phi4-multimodal-quantisized-gguf
ShayanCyan
2026-02-16T14:01:26Z
3,424
5
other
[ "other", "gguf", "phi", "phi4-multimodal", "quantized", "visual-question-answering", "speech-translation", "speech-summarization", "audio", "vision", "image-to-text", "en", "ur", "de", "es", "tr", "fr", "it", "base_model:microsoft/Phi-4-multimodal-instruct", "base_model:quantiz...
image-to-text
2026-02-16T12:24:30Z
# Phi-4 Multimodal – Quantized GGUF + Omni Projector This repository provides **pre-converted GGUF weights** for running **[microsoft/Phi-4-multimodal-instruct](https://huggingface.co/microsoft/Phi-4-multimodal-instruct)** with a **quantized language model** and a **multimodal projector (mmproj)** on top of a speciali...
[]
akahana/qwen3-4b-text-embedding-4bit
akahana
2025-12-04T09:49:40Z
1
0
sentence-transformers
[ "sentence-transformers", "safetensors", "qwen3", "feature-extraction", "transformers", "sentence-similarity", "text-embeddings-inference", "arxiv:2506.05176", "base_model:Qwen/Qwen3-4B-Base", "base_model:quantized:Qwen/Qwen3-4B-Base", "license:apache-2.0", "endpoints_compatible", "4-bit", ...
feature-extraction
2025-12-04T09:49:05Z
# Qwen3-Embedding-4B <p align="center"> <img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/logo_qwen3.png" width="400"/> <p> ## Highlights The Qwen3 Embedding model series is the latest proprietary model of the Qwen family, specifically designed for text embedding and ranking tasks. Building upon...
[]
sach088/dino_touch_and_go
sach088
2025-11-24T03:38:27Z
0
0
lerobot
[ "lerobot", "safetensors", "act", "robotics", "dataset:sach088/dino_touch_and_go", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2025-11-24T03:38:18Z
# Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ...
[ { "start": 17, "end": 20, "text": "act", "label": "training method", "score": 0.831265389919281 }, { "start": 120, "end": 123, "text": "ACT", "label": "training method", "score": 0.8477550148963928 }, { "start": 865, "end": 868, "text": "act", "label":...
jonas-bauer/act_golden_mouse
jonas-bauer
2026-04-22T00:16:33Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "act", "dataset:jonas-bauer/golden-mouse-task", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2026-04-22T00:15:07Z
# Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ...
[ { "start": 17, "end": 20, "text": "act", "label": "training method", "score": 0.831265389919281 }, { "start": 120, "end": 123, "text": "ACT", "label": "training method", "score": 0.8477550148963928 }, { "start": 865, "end": 868, "text": "act", "label":...
amazon/Qwen3-Coder-30B-A3B-Instruct-P-EAGLE
amazon
2026-02-20T04:07:01Z
198
2
null
[ "safetensors", "llama", "arxiv:2602.01469", "license:apache-2.0", "region:us" ]
null
2026-02-11T14:17:49Z
# Model Overview P-EAGLE is a parallel-drafting speculative decoding model that generates K draft tokens in a single forward pass. It transforms EAGLE—the state-of-the-art speculative decoding method—from autoregressive to parallel draft generation. ### Model Details The model architecture is illustrated in the follo...
[ { "start": 18, "end": 25, "text": "P-EAGLE", "label": "training method", "score": 0.8849842548370361 }, { "start": 146, "end": 151, "text": "EAGLE", "label": "training method", "score": 0.7236013412475586 }, { "start": 368, "end": 375, "text": "P-EAGLE", ...
leongaodev/distilbert-base-uncased-finetuned-emotion
leongaodev
2026-02-26T15:18:43Z
32
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
text-classification
2026-02-26T14:04:02Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/...
[]
mlx-community/Voxtral-Mini-3B-2507-bf16
mlx-community
2026-01-13T00:45:09Z
248
2
mlx-audio
[ "mlx-audio", "safetensors", "voxtral", "speech-to-text", "mlx", "en", "fr", "de", "es", "it", "pt", "nl", "hi", "license:apache-2.0", "region:us" ]
null
2025-08-18T13:17:43Z
# mlx-community/Voxtral-Mini-3B-2507-bf16 This model was converted to MLX format from [`mistralai/Voxtral-Mini-3B-2507`](https://huggingface.co/mistralai/Voxtral-Mini-3B-2507) using mlx-audio version **0.2.4**. Refer to the [original model card](https://huggingface.co/mistralai/Voxtral-Mini-3B-2507) for more details on...
[]
ASethi04/qwen-2.5-7b-hellaswag-first
ASethi04
2025-09-03T14:28:16Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-7B", "base_model:finetune:Qwen/Qwen2.5-7B", "endpoints_compatible", "region:us" ]
null
2025-09-03T14:28:09Z
# Model Card for Qwen-Qwen2.5-7B-hellaswag-lora-first This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine...
[]
omarsameh1996/hardware_circuits_finetuned
omarsameh1996
2026-03-28T17:19:08Z
0
1
null
[ "safetensors", "electrical-engineering", "hardware-design", "power-electronics", "high-speed-design", "qwen", "lora", "finetuned", "text-generation", "instruction-tuned", "netlist", "colab", "dataset", "conversational", "region:us" ]
text-generation
2026-03-28T17:14:25Z
# Fine-tuned Qwen 3.5-2B for Hardware Circuit Analysis ## Model Description This repository hosts a meticulously fine-tuned version of the `Qwen/Qwen3.5-2B` language model, specifically engineered to understand, analyze, and summarize electronic hardware circuits. Leveraging Low-Rank Adaptation (LoRA), this model was...
[]
ISTA-DASLab/Llama-3.2-3B-Instruct-FPQuant-QAT-NVFP4
ISTA-DASLab
2025-10-27T16:19:45Z
130
0
null
[ "safetensors", "llama", "arxiv:2509.23202", "8-bit", "fp_quant", "region:us" ]
null
2025-10-16T14:47:05Z
This is the official QAT FP-Quant checkpoint of `meta-llama/Llama-3.2-3B-Instruct`, produced as described in the [**"Bridging the Gap Between Promise and Performance for Microscaling FP4 Quantization"**](https://arxiv.org/abs/2509.23202) paper. This model can be run on Blackwell-generation NVIDIA GPUs via [QuTLASS](ht...
[]
Daizee/Luna-Gemma3-4b-GGUFs
Daizee
2025-10-27T05:25:27Z
26
0
transformers
[ "transformers", "gguf", "local-llm", "luna", "en", "dataset:your-dataset-name", "license:mit", "region:us", "conversational" ]
null
2025-10-23T04:00:22Z
# ---------- MODEL CARD ---------- license: gemma base_model: google/gemma-3-4b-it language: en # Luna — Gemma 3 4B (GGUF) **Luna** is a gentle, neurodivergent-aware chat companion fine-tuned from **Google’s Gemma-3 4B IT**. I *highly* recommend using a system prompt. (An example is below). Without a system promp...
[]
nightmedia/Qwen3-MOE-4x8B-Janus-Blossom-Claude-Gemini-qx64-hi-mlx
nightmedia
2026-02-01T00:59:13Z
61
0
transformers
[ "transformers", "safetensors", "qwen3_moe", "text-generation", "coding", "research", "deep thinking", "128k context", "Qwen3", "All use cases", "creative", "creative writing", "fiction writing", "plot generation", "sub-plot generation", "story generation", "scene continue", "storyt...
text-generation
2026-01-31T12:03:20Z
# Qwen3-MOE-4x8B-Janus-Blossom-Claude-Gemini-qx64-hi-mlx This is a MoE with 2 active experts from: ## Qwen3-8B-Element2 (assistant) This model is a 1.4/0.6 nuslerp merge of: - Azure99/Blossom-V6.3-8B - nightmedia/Qwen3-8B-Element ## Qwen3-8B-Element This model is a 1.4/0.6 nuslerp merge of: - unsloth/JanusCoder-8B -...
[]
Muapi/omegle-webcam-flux-dev
Muapi
2025-08-16T21:30:25Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-16T21:30:05Z
# Omegle webcam [Flux Dev] ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: An omegle.com webcam of ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" h...
[]
nvidia/multitalker-parakeet-streaming-0.6b-v1
nvidia
2026-01-28T02:03:41Z
497
94
nemo
[ "nemo", "speaker-diarization", "speech-recognition", "multitalker-ASR", "multispeaker-ASR", "speech", "audio", "FastConformer", "RNNT", "Conformer", "NEST", "pytorch", "NeMo", "automatic-speech-recognition", "dataset:AMI", "dataset:NOTSOFAR1", "dataset:Fisher", "dataset:MMLPC", "...
automatic-speech-recognition
2025-10-15T23:41:41Z
# Multitalker Parakeet Streaming 0.6B v1 <style> img { display: inline; } </style> [![Model architecture](https://img.shields.io/badge/Model_Arch-FastConformer--Transformer-lightgrey#model-badge)](#model-architecture) | [![Model size](https://img.shields.io/badge/Params-600M-lightgrey#model-badge)](#model-architectu...
[]
Sams200/opus-mt-sm-en
Sams200
2026-04-03T14:36:32Z
0
0
null
[ "translation", "ctranslate2", "opus-mt", "sm", "en", "license:cc-by-4.0", "region:us" ]
translation
2026-04-03T14:36:20Z
# opus-mt-sm-en (CTranslate2) CTranslate2-converted version of [Helsinki-NLP/opus-mt-sm-en](https://huggingface.co/Helsinki-NLP/opus-mt-sm-en) for use with [CTranslate2](https://github.com/OpenNMT/CTranslate2). ## Files | File | Description | |------|-------------| | `model.bin` | CTranslate2 model weights | | `sour...
[]
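The record above (like the opus-mt-ty-fi record later on) is an OPUS-MT model converted to CTranslate2. A sketch of translating with it; the tokenizer filename follows the card's truncated file table and is an assumption:
```python
# Sketch: translation with a CTranslate2-converted OPUS-MT model.
import ctranslate2
import sentencepiece as spm

translator = ctranslate2.Translator("opus-mt-sm-en")         # dir with model.bin
sp = spm.SentencePieceProcessor("opus-mt-sm-en/source.spm")  # filename assumed

tokens = sp.encode("Talofa!", out_type=str)
result = translator.translate_batch([tokens])
print(sp.decode(result[0].hypotheses[0]))
```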
TAUR-dev/M-0903_rl_reflect__1d_3args__grpo_minibs32_lr1e-6_rollout16-rl
TAUR-dev
2025-09-03T16:13:54Z
0
0
null
[ "safetensors", "qwen2", "en", "license:mit", "region:us" ]
null
2025-09-03T08:44:12Z
# M-0903_rl_reflect__1d_3args__grpo_minibs32_lr1e-6_rollout16-rl ## Model Details - **Training Method**: VeRL Reinforcement Learning (RL) - **Stage Name**: rl - **Experiment**: 0903_rl_reflect__1d_3args__grpo_minibs32_lr1e-6_rollout16 - **RL Framework**: VeRL (Versatile Reinforcement Learning) ## Training Configurat...
[]
TareksLab/Mithril-Prose-LLaMa-70B
TareksLab
2025-08-22T23:53:03Z
24
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2408.07990", "base_model:ArliAI/DS-R1-Distill-70B-ArliAI-RpR-v4-Large", "base_model:merge:ArliAI/DS-R1-Distill-70B-ArliAI-RpR-v4-Large", "base_model:Delta-Vector/Austral-70B-Winton", "base_...
text-generation
2025-08-22T23:33:30Z
# merged This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [nbeerbower/Llama-3.1-Nemotron-lorablated-70B](https://huggingface.co/nbeer...
[ { "start": 1157, "end": 1160, "text": "sce", "label": "training method", "score": 0.7367468476295471 } ]
jaimefrevoltio/act_t1_fold_v1_biarm_s101
jaimefrevoltio
2025-08-14T13:18:41Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "act", "dataset:jaimefrevoltio/fold_v1_biarm_s101", "arxiv:2304.13705", "license:apache-2.0", "region:us" ]
robotics
2025-08-14T13:18:34Z
# Model Card for act <!-- Provide a quick summary of what the model is/does. --> [Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ...
[ { "start": 17, "end": 20, "text": "act", "label": "training method", "score": 0.8059530854225159 }, { "start": 120, "end": 123, "text": "ACT", "label": "training method", "score": 0.8365488052368164 }, { "start": 883, "end": 886, "text": "act", "label"...
goyalayus/wordle-hardening-20260328-164228-preurlstop3-sft_main
goyalayus
2026-03-28T16:45:38Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "unsloth", "sft", "trl", "endpoints_compatible", "region:us" ]
null
2026-03-28T16:44:24Z
# Model Card for wordle-hardening-20260328-164228-preurlstop3-sft_main This model is a fine-tuned version of [unsloth/qwen3-4b-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen3-4b-unsloth-bnb-4bit). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers i...
[]
kakao1513/merchant-consumption-category-discriminator-v2
kakao1513
2026-03-13T08:33:31Z
82
0
transformers
[ "transformers", "safetensors", "electra", "text-classification", "korean", "merchant-category", "ko", "endpoints_compatible", "region:us" ]
text-classification
2026-03-13T08:33:01Z
# Merchant Consumption Category Discriminator v1 - Repository: https://huggingface.co/kakao1513/merchant-consumption-category-discriminator-v2 - Base checkpoint: `monologg/koelectra-base-v3-discriminator` - Export metadata model_name: `monologg/koelectra-base-v3-discriminator` - Input format: `merchant_text [SEP] norm...
[]
SunTaiyo/dpo-qwen-cot-merged-3based
SunTaiyo
2026-02-08T06:22:58Z
1
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "dpo", "unsloth", "qwen", "alignment", "conversational", "en", "dataset:u-10bei/dpo-dataset-qwen-cot", "base_model:Qwen/Qwen3-4B-Instruct-2507", "base_model:finetune:Qwen/Qwen3-4B-Instruct-2507", "license:apache-2.0", "text-gener...
text-generation
2026-02-08T06:19:42Z
# qwen3-4b-structured-qlora-stage2-v1-dpo This model is a fine-tuned version of **Qwen/Qwen3-4B-Instruct-2507** using **Direct Preference Optimization (DPO)** via the **Unsloth** library. This repository contains the **full-merged 16-bit weights**. No adapter loading is required. ## Training Objective This model has...
[ { "start": 121, "end": 151, "text": "Direct Preference Optimization", "label": "training method", "score": 0.8642358183860779 }, { "start": 153, "end": 156, "text": "DPO", "label": "training method", "score": 0.8751498460769653 }, { "start": 342, "end": 345, ...
tuanamz/livekit-turn-detector-fisher-eot-lora
tuanamz
2026-04-30T05:40:00Z
0
0
peft
[ "peft", "safetensors", "lora", "turn-detection", "end-of-turn", "voice-assistant", "speech", "text-generation", "conversational", "en", "base_model:livekit/turn-detector", "base_model:adapter:livekit/turn-detector", "region:us" ]
text-generation
2026-04-30T05:39:57Z
# LiveKit Turn-Detector — Fisher LoRA LoRA adapter on top of [`livekit/turn-detector`](https://huggingface.co/livekit/turn-detector) (Qwen2.5-0.5B), fine-tuned on the Fisher English telephone corpus (LDC2004T19 + LDC2005T19) for end-of-turn (EOT) detection. The pretrained LiveKit detector is strong on structured voic...
[]
amaljoe88/Qwen2.5-VL-3B-Instruct-Thinking
amaljoe88
2026-01-18T17:19:20Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-VL-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct", "endpoints_compatible", "region:us" ]
null
2026-01-18T13:53:50Z
# Model Card for Qwen2.5-VL-3B-Instruct-Thinking This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you...
[ { "start": 746, "end": 750, "text": "GRPO", "label": "training method", "score": 0.7128962278366089 } ]
OfficerChul/InfiGUI-G1-3B-Android-Control-5a
OfficerChul
2025-09-29T07:05:46Z
5
1
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:InfiX-ai/InfiGUI-G1-3B", "base_model:finetune:InfiX-ai/InfiGUI-G1-3B", "license:other", "text-generation-inference", "endpoints_compatible", "...
image-text-to-text
2025-09-29T07:03:27Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sft This model is a fine-tuned version of [InfiX-ai/InfiGUI-G1-3B](https://huggingface.co/InfiX-ai/InfiGUI-G1-3B) on the and_ctrl...
[]
netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA-Q4_K_S-GGUF
netcat420
2025-08-14T07:51:05Z
1
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "en", "dataset:netcat420/Kayla", "base_model:netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA", "base_model:quantized:netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-14T07:50:40Z
# netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA-Q4_K_S-GGUF This model was converted to GGUF format from [`netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA`](https://huggingface.co/netcat420/DeepSeek-R1-0528-Qwen3-8B-KAYLA) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space...
[]
NikolayKozloff/GigaChat3-10B-A1.8B-bf16-Q5_K_S-GGUF
NikolayKozloff
2025-12-03T03:13:09Z
10
1
null
[ "gguf", "moe", "llama-cpp", "gguf-my-repo", "text-generation", "ru", "en", "base_model:ai-sage/GigaChat3-10B-A1.8B-bf16", "base_model:quantized:ai-sage/GigaChat3-10B-A1.8B-bf16", "license:mit", "endpoints_compatible", "region:us" ]
text-generation
2025-12-03T03:12:40Z
# NikolayKozloff/GigaChat3-10B-A1.8B-bf16-Q5_K_S-GGUF This model was converted to GGUF format from [`ai-sage/GigaChat3-10B-A1.8B-bf16`](https://huggingface.co/ai-sage/GigaChat3-10B-A1.8B-bf16) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [orig...
[]
noctrex/OpenThinker-Agent-v1-abliterated-GGUF
noctrex
2025-12-08T22:30:59Z
60
0
null
[ "gguf", "uncensored", "abliterated", "text-generation", "base_model:open-thoughts/OpenThinker-Agent-v1", "base_model:quantized:open-thoughts/OpenThinker-Agent-v1", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-12-08T21:38:44Z
This is an abliterated version of [OpenThinker-Agent-v1](https://huggingface.co/open-thoughts/OpenThinker-Agent-v1), made using [Heretic](https://github.com/p-e-w/heretic) v1.0.1 The quantizations were created using an imatrix merged from [combined\_en\_medium](https://huggingface.co/datasets/eaddario/imatrix-calibrat...
[]
cyankiwi/Qwen3-VL-2B-Instruct-AWQ-4bit
cyankiwi
2026-02-05T16:29:30Z
409
0
transformers
[ "transformers", "safetensors", "qwen3_vl", "image-text-to-text", "conversational", "arxiv:2505.09388", "arxiv:2502.13923", "arxiv:2409.12191", "arxiv:2308.12966", "base_model:Qwen/Qwen3-VL-2B-Instruct", "base_model:quantized:Qwen/Qwen3-VL-2B-Instruct", "license:apache-2.0", "endpoints_compat...
image-text-to-text
2026-02-05T16:27:07Z
<a href="https://huggingface.co/spaces/akhaliq/Qwen3-VL-2B-Instruct" target="_blank" style="margin: 2px;"> <img alt="Demo" src="https://img.shields.io/badge/Demo-536af5" style="display: inline-block; vertical-align: middle;"/> </a> # Qwen3-VL-2B-Instruct Meet Qwen3-VL — the most powerful vision-language model i...
[]
Jeffx5/Llama2-7b-finetuned
Jeffx5
2025-12-24T03:29:44Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-12-24T03:20:45Z
# Model Card for Llama2-7b-finetuned This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you h...
[]
Muapi/detail-enhancer-3d-blender-style
Muapi
2025-08-14T08:01:45Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-14T08:01:25Z
# (Detail Enhancer) 3D Blender Style ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: Realistic, 3D, Ay0st Style ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_...
[]
uam-rl/qwen35-9b-typst-grpo-lora
uam-rl
2026-04-24T10:33:50Z
14
1
peft
[ "peft", "safetensors", "lora", "grpo", "verl", "typst", "qwen3.5", "text-generation", "base_model:Qwen/Qwen3.5-9B", "base_model:adapter:Qwen/Qwen3.5-9B", "region:us" ]
text-generation
2026-04-23T10:04:21Z
# Qwen3.5 9B Typst GRPO LoRA This repository contains the adapter-only checkpoint from the VERL Typst APPS GRPO run that completed one full training step on 2026-04-23. It does not include merged base-model weights. The run was initialized from the local warm SFT merged model at `/workspace/typst_universe_scrape/outp...
[]
MatanBT/backdoor-model-5
MatanBT
2026-03-09T13:09:45Z
17
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "generated_from_trainer", "conversational", "base_model:google/gemma-2-2b-it", "base_model:finetune:google/gemma-2-2b-it", "license:gemma", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2026-03-09T12:45:36Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # backdoor-model-5 This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it) on the...
[]
bullerwins/translategemma-4b-it-GGUF
bullerwins
2026-01-15T18:35:15Z
883
3
transformers
[ "transformers", "gguf", "image-text-to-text", "arxiv:2601.09012", "arxiv:2503.19786", "base_model:google/translategemma-4b-it", "base_model:quantized:google/translategemma-4b-it", "license:gemma", "endpoints_compatible", "region:us", "conversational" ]
image-text-to-text
2026-01-15T18:33:48Z
# TranslateGemma model card **Resources and Technical Documentation**: + [Technical Report](https://arxiv.org/pdf/2601.09012) + [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) + [TranslateGemma on Kaggle](https://www.kaggle.com/models/google/translategemma/) + [TranslateGemma on Vertex...
[]
phospho-app/cmsng2001-ACT_BBOX-dataset_20250901_A-vddoj
phospho-app
2025-09-02T14:11:35Z
0
0
phosphobot
[ "phosphobot", "act", "robotics", "dataset:cmsng2001/dataset_20250901_A", "region:us" ]
robotics
2025-09-02T14:10:58Z
--- datasets: cmsng2001/dataset_20250901_A library_name: phosphobot pipeline_tag: robotics model_name: act tags: - phosphobot - act task_categories: - robotics --- # act model - 🧪 phosphobot training pipeline - **Dataset**: [cmsng2001/dataset_20250901_A](https://hugging...
[]
itskoma/posttraining_checkpoint
itskoma
2026-03-05T14:44:58Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2026-03-05T13:56:19Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # posttraining_checkpoint This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the ...
[]
Aarnb/visual_description_as_Nandalal
Aarnb
2025-12-04T21:30:49Z
0
0
null
[ "safetensors", "blip", "license:apache-2.0", "region:us" ]
null
2025-12-04T16:53:16Z
# Use this to generate a visual description of an image in Nandalal Bose style ```python import torch from transformers import BlipProcessor, BlipForConditionalGeneration from PIL import Image from huggingface_hub import login # Optional: Login if your repo is Private. If Public, you can skip this. # HF_TOKEN = "hf...
[]
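The card's snippet above is cut off mid-setup. A sketch of the usual BLIP captioning flow it appears to be building toward; whether BlipProcessor resolves directly from this repo is an assumption, and the image path is a placeholder:
```python
# Sketch: BLIP conditional generation for an image description.
import torch
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

repo = "Aarnb/visual_description_as_Nandalal"  # repo from the record above
processor = BlipProcessor.from_pretrained(repo)
model = BlipForConditionalGeneration.from_pretrained(repo)

image = Image.open("example.jpg").convert("RGB")  # placeholder image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(out[0], skip_special_tokens=True))
```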
noctrex/Huihui-Qwen3-VL-8B-Thinking-abliterated-i1-GGUF
noctrex
2025-11-09T10:47:25Z
218
0
null
[ "gguf", "image-text-to-text", "base_model:huihui-ai/Huihui-Qwen3-VL-8B-Thinking-abliterated", "base_model:quantized:huihui-ai/Huihui-Qwen3-VL-8B-Thinking-abliterated", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
image-text-to-text
2025-11-09T10:19:16Z
These are quantizations of the model [Huihui-Qwen3-VL-8B-Thinking-abliterated](https://huggingface.co/huihui-ai/Huihui-Qwen3-VL-8B-Thinking-abliterated). These quantizations were created using an imatrix merged from [combined\_all\_large](https://huggingface.co/datasets/eaddario/imatrix-calibration/blob/main/combined_...
[]
jonbrees/evd3x-agent-lora-qwen15b
jonbrees
2026-04-04T17:43:29Z
0
0
peft
[ "peft", "safetensors", "biology", "bioinformatics", "extracellular-vesicles", "mirna", "lora", "qwen2", "evd3x", "en", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct", "license:apache-2.0", "region:us" ]
null
2026-04-04T17:43:25Z
# EVd3x-Agent LoRA — Qwen2.5-1.5B-Instruct A QLoRA adapter fine-tuned on the EVd3x instruction corpus for extracellular vesicle (EV) cargo biology research assistance. ## Model Details - **Base model:** `Qwen/Qwen2.5-1.5B-Instruct` - **Method:** QLoRA (r=16, alpha=32, dropout=0.05) - **Task:** Causal LM — intent rou...
[]
manancode/opus-mt-ty-fi-ctranslate2-android
manancode
2025-08-12T23:47:49Z
0
0
null
[ "translation", "opus-mt", "ctranslate2", "quantized", "multilingual", "license:apache-2.0", "region:us" ]
translation
2025-08-12T23:47:38Z
# opus-mt-ty-fi-ctranslate2-android This is a quantized INT8 version of `Helsinki-NLP/opus-mt-ty-fi` converted to CTranslate2 format for efficient inference. ## Model Details - **Original Model**: Helsinki-NLP/opus-mt-ty-fi - **Format**: CTranslate2 - **Quantization**: INT8 - **Framework**: OPUS-MT - **Converted by*...
[]
Muapi/ivan-bilibin-style
Muapi
2025-08-15T20:48:06Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-15T20:47:49Z
# Ivan Bilibin Style ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: Ivan Bilibin Style ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"...
[]
xpmir/cross-encoder-RoBERTa-BCE
xpmir
2026-03-17T16:54:04Z
62
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "cross-encoder", "sequence-classification", "en", "dataset:msmarco", "arxiv:2603.03010", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:apache-2.0", "text-embeddin...
text-classification
2026-03-04T16:03:44Z
# cross-encoder-RoBERTa-BCE [![Paper](https://img.shields.io/badge/Paper-Arxiv-red)](http://arxiv.org/abs/2603.03010) [![All Models](https://img.shields.io/badge/🤗%20Hugging%20Face%20Models-blue)](https://huggingface.co/collections/xpmir/reproducing-cross-encoders) [![GitHub](https://img.shields.io/badge/GitHub-Code-...
[ { "start": 473, "end": 476, "text": "bce", "label": "training method", "score": 0.858522891998291 }, { "start": 1007, "end": 1010, "text": "bce", "label": "training method", "score": 0.842974066734314 } ]
nscharrenberg/DBNL-QA-EN-e5-s1024-lr-5e-4-lr-seed3704-V2
nscharrenberg
2025-10-16T20:28:55Z
0
0
transformers
[ "transformers", "tensorboard", "generated_from_trainer", "unsloth", "sft", "trl", "base_model:unsloth/Llama-3.2-1B-Instruct", "base_model:finetune:unsloth/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-10-16T20:26:57Z
# Model Card for DBNL-QA-EN-e5-s1024-lr-5e-4-lr-seed3704-V2 This model is a fine-tuned version of [unsloth/Llama-3.2-1B-Instruct](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline que...
[]
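The quick start truncates mid-snippet. The standard TRL card pattern it follows looks like this (the repo id is from this record; the question is a placeholder):

```python
# Standard TRL quick-start pattern; the question is a placeholder.
from transformers import pipeline

question = "If you had a time machine, where would you go and why?"
generator = pipeline(
    "text-generation",
    model="nscharrenberg/DBNL-QA-EN-e5-s1024-lr-5e-4-lr-seed3704-V2",
)
output = generator([{"role": "user", "content": question}],
                   max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```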
Insta360-Research/DiT360-Panorama-Image-Generation
Insta360-Research
2025-10-17T08:34:37Z
1,389
21
diffusers
[ "diffusers", "safetensors", "text-to-image", "arxiv:2510.11712", "base_model:black-forest-labs/FLUX.1-dev", "base_model:finetune:black-forest-labs/FLUX.1-dev", "license:mit", "region:us" ]
text-to-image
2025-10-09T14:21:04Z
# DiT360: High-Fidelity Panoramic Image Generation via Hybrid Training <a href='https://arxiv.org/abs/2510.11712'><img src='https://img.shields.io/badge/arXiv-Paper-red?logo=arxiv&logoColor=white' alt='arXiv'></a> <a href='https://fenghora.github.io/DiT360-Page/'><img src='https://img.shields.io/badge/Project_Page-Web...
[ { "start": 55, "end": 70, "text": "Hybrid Training", "label": "training method", "score": 0.725398063659668 }, { "start": 1143, "end": 1158, "text": "hybrid training", "label": "training method", "score": 0.8673135638237 } ]
mawaskow/inc_sent_cls_bn
mawaskow
2025-11-23T16:16:08Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "feature-extraction", "text-classification", "dataset:mawaskow/irish_forestry_incentives", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
text-classification
2025-11-23T15:37:24Z
# SentenceTransformer This is a trained [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Mod...
[]
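A minimal sketch of standard sentence-transformers usage for this checkpoint (the example sentences are placeholders):

```python
# Minimal sketch of standard sentence-transformers usage; the example
# sentences are placeholders.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("mawaskow/inc_sent_cls_bn")
sentences = [
    "Grants are available for broadleaf afforestation.",
    "Native woodland planting may qualify for a premium.",
]
embeddings = model.encode(sentences)             # shape: (2, 768)
print(model.similarity(embeddings, embeddings))  # pairwise similarity matrix
```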
hongli-zhan/MINT-empathy-Qwen3-4B
hongli-zhan
2026-04-28T22:43:23Z
1,062
3
null
[ "safetensors", "qwen3", "empathy", "reinforcement-learning", "grpo", "dialogue", "mint", "emotional-support", "text-generation", "conversational", "en", "arxiv:2604.11742", "base_model:Qwen/Qwen3-4B", "base_model:finetune:Qwen/Qwen3-4B", "license:mit", "region:us" ]
text-generation
2026-04-10T21:23:28Z
# MINT-empathy-Qwen3-4B This model is the **Q + D_KL** MINT checkpoint fine-tuned from [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B) for multi-turn empathic dialogue. MINT, short for **Multi-turn Inter-tactic Novelty Training**, is a reinforcement learning framework that optimizes empathic response quality to...
[ { "start": 177, "end": 181, "text": "MINT", "label": "training method", "score": 0.7012671828269958 } ]
vighneshanap/tribev2
vighneshanap
2026-04-02T07:03:53Z
0
0
null
[ "license:cc-by-nc-4.0", "region:us" ]
null
2026-04-02T07:03:53Z
<div align="center"> # TRIBE v2 **A Foundation Model of Vision, Audition, and Language for In-Silico Neuroscience** [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/facebookresearch/tribev2/blob/main/tribe_demo.ipynb) [![License: CC BY-NC 4.0](http...
[]
zhangyi617/sd15_naruto_text_0.07_mix_0.8
zhangyi617
2026-02-12T10:28:29Z
1
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers-training", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2026-02-12T08:54:34Z
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # LoRA text2image fine-tuning - zhangyi617/sd15_naruto_text_0.07_mix_0.8 These are LoRA adaptation weights for runwayml/stabl...
[]
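A minimal sketch of loading these LoRA weights onto the SD 1.5 base with diffusers (repo ids come from the record; the prompt and step count are placeholders):

```python
# Minimal sketch, assuming standard diffusers LoRA loading; prompt and
# settings are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("zhangyi617/sd15_naruto_text_0.07_mix_0.8")

image = pipe("a ninja in Naruto style", num_inference_steps=30).images[0]
image.save("naruto_lora.png")
```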
ilessio-aiflowlab/project_agora
ilessio-aiflowlab
2026-03-28T14:12:40Z
0
0
transformers
[ "transformers", "robotics", "anima", "agora", "multi-robot", "task-planning", "coordination", "lora", "qwen2.5", "robot-flow-labs", "text-generation", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct", "license:apache-2.0", "model-index", "endpoin...
text-generation
2026-03-27T04:30:07Z
# AGORA — Multi-Robot Task Planner v1 Part of the [ANIMA Perception Suite](https://github.com/RobotFlow-Labs) by Robot Flow Labs / AIFLOW LABS LIMITED. ## Overview AGORA (Adaptive Group Operations & Resource Allocation) is the Wave-5 unified STEM (Spatio-Temporal-Embodiment Memory) framework for multi-robot collabor...
[]
aciidix/Llama-Poro-2-70B-Instruct-mlx-fp16
aciidix
2025-12-15T10:27:55Z
11
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mlx", "conversational", "fi", "en", "dataset:LumiOpen/poro2-instruction-collection", "dataset:nvidia/HelpSteer3", "base_model:LumiOpen/Llama-Poro-2-70B-Instruct", "base_model:finetune:LumiOpen/Llama-Poro-2-70B-Instruct", "license:ll...
text-generation
2025-12-15T10:10:00Z
# aciidix/Llama-Poro-2-70B-Instruct-mlx-fp16 The Model [aciidix/Llama-Poro-2-70B-Instruct-mlx-fp16](https://huggingface.co/aciidix/Llama-Poro-2-70B-Instruct-mlx-fp16) was converted to MLX format from [LumiOpen/Llama-Poro-2-70B-Instruct](https://huggingface.co/LumiOpen/Llama-Poro-2-70B-Instruct) using mlx-lm version **...
[]
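A minimal sketch of running the converted checkpoint with mlx-lm on Apple silicon (the prompt is a placeholder):

```python
# Minimal sketch of standard mlx-lm usage; the prompt is a placeholder.
from mlx_lm import load, generate

model, tokenizer = load("aciidix/Llama-Poro-2-70B-Instruct-mlx-fp16")

messages = [{"role": "user", "content": "Kerro lyhyesti Suomen historiasta."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
print(generate(model, tokenizer, prompt=prompt, verbose=True))
```

Note that at fp16 a 70B-parameter model is roughly 140 GB of weights, so this checkpoint only fits on very high-memory Apple silicon machines.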
mradermacher/Qwen3-VL-8B-Thinking-heretic-GGUF
mradermacher
2026-03-08T07:58:31Z
706
0
transformers
[ "transformers", "gguf", "heretic", "uncensored", "decensored", "abliterated", "en", "base_model:sh0ck0r/Qwen3-VL-8B-Thinking-heretic", "base_model:quantized:sh0ck0r/Qwen3-VL-8B-Thinking-heretic", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2026-03-08T06:25:37Z
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: 1 --> static ...
[]
zhuojing-huang/gpt2-german-english-bi-vocab-1
zhuojing-huang
2026-03-05T18:19:21Z
248
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2026-02-08T04:12:11Z
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-german-english-bi-vocab-1 This model was trained from scratch on an unspecified dataset. ## Model description More information n...
[]
Thireus/Qwen3-VL-235B-A22B-Thinking-THIREUS-BF16-SPECIAL_SPLIT
Thireus
2026-02-12T18:16:45Z
8
0
null
[ "gguf", "arxiv:2505.23786", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-26T06:44:43Z
## ⚠️ Cautionary Notice The metadata of these quants has been updated and is now compatible with the latest version of `llama.cpp` (and `ik_llama.cpp`). - ⚠️ **Official support in `llama.cpp` was recently made available** – see [ggml-org/llama.cpp PR #16780](http://github.com/ggml-org/llama.cpp/pull/16780). - ⚠️ **Of...
[]
kmseong/llama3.2_3b_instruct_new_only_sn_tuned_lr3e-5
kmseong
2026-04-13T12:07:36Z
0
0
null
[ "safetensors", "llama", "safety", "fine-tuning", "safety-neurons", "license:apache-2.0", "region:us" ]
null
2026-04-13T11:04:09Z
# llama3.2_3b_instruct_new_only_sn_tuned_lr3e-5 This is a Safety Neuron-Tuned (SN-Tune) version of Llama-3.2-3B-Instruct. ## Model Description - **Base Model**: meta-llama/Llama-3.2-3B-Instruct - **Fine-tuning Method**: SN-Tune (Safety Neuron Tuning) - **Training Data**: Circuit Breakers dataset (safety alignment da...
[ { "start": 80, "end": 87, "text": "SN-Tune", "label": "training method", "score": 0.9189419746398926 }, { "start": 223, "end": 230, "text": "SN-Tune", "label": "training method", "score": 0.9543287754058838 }, { "start": 375, "end": 382, "text": "SN-Tune",...
MK100283/autotrain-e4umi-78jk9
MK100283
2025-11-03T05:21:03Z
2
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "autotrain", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
text-classification
2025-11-03T05:19:53Z
--- library_name: transformers tags: - autotrain - text-classification base_model: google-bert/bert-base-uncased widget: - text: "I love AutoTrain" --- # Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 1.8341673612594604 f1_macro: 0.28843537414965986 f1_micro: 0.357142...
[ { "start": 39, "end": 48, "text": "autotrain", "label": "training method", "score": 0.8100082278251648 }, { "start": 137, "end": 146, "text": "AutoTrain", "label": "training method", "score": 0.7136901021003723 }, { "start": 175, "end": 184, "text": "AutoT...
chliu12/all-MiniLM-L6-v2
chliu12
2026-02-23T13:18:27Z
0
0
sentence-transformers
[ "sentence-transformers", "pytorch", "tf", "rust", "onnx", "safetensors", "openvino", "bert", "feature-extraction", "sentence-similarity", "transformers", "en", "dataset:s2orc", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:ms_marco", "dataset:gooaq", "dataset:yahoo_a...
sentence-similarity
2026-02-23T13:18:26Z
# all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](ht...
[]
costanzuni/qwen25-3b-survival-raft-Q4_K_M-GGUF
costanzuni
2025-12-05T23:46:35Z
8
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:costanzuni/qwen25-3b-survival-raft", "base_model:quantized:costanzuni/qwen25-3b-survival-raft", "endpoints_compatible", "region:us", "conversational" ]
null
2025-12-05T23:46:23Z
# costanzuni/qwen25-3b-survival-raft-Q4_K_M-GGUF This model was converted to GGUF format from [`costanzuni/qwen25-3b-survival-raft`](https://huggingface.co/costanzuni/qwen25-3b-survival-raft) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [origi...
[]
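One way to run the Q4_K_M quant from Python with llama-cpp-python; the exact GGUF file name inside the repo is an assumption, so a glob pattern is used:

```python
# Minimal sketch using llama-cpp-python; the GGUF file name is assumed,
# hence the glob pattern.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="costanzuni/qwen25-3b-survival-raft-Q4_K_M-GGUF",
    filename="*Q4_K_M.gguf",  # glob over the repo's GGUF files (name assumed)
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How do I purify water in the wild?"}]
)
print(out["choices"][0]["message"]["content"])
```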
cyankiwi/Mistral-Small-4-119B-2603-AWQ-4bit
cyankiwi
2026-03-23T07:16:07Z
1,763
4
null
[ "safetensors", "mistral3", "vLLM", "en", "fr", "de", "es", "pt", "it", "ja", "ko", "ru", "zh", "ar", "fa", "id", "ms", "ne", "pl", "ro", "sr", "sv", "tr", "uk", "vi", "hi", "bn", "base_model:mistralai/Mistral-Small-4-119B-2603", "base_model:quantized:mistralai...
null
2026-03-18T09:33:05Z
# Mistral Small 4 119B A6B Mistral Small 4 is a powerful hybrid model capable of acting as both a general instruction model and a reasoning model. It unifies the capabilities of three different model families—**Instruct**, **Reasoning** (previously called Magistral), and **Devstral**—into a single model. Wit...
[]