| modelId (string, 9-122 chars) | author (string, 2-36 chars) | last_modified (timestamp[us, tz=UTC], 2021-05-20 01:31:09 to 2026-05-05 06:14:24) | downloads (int64, 0 to 4.03M) | likes (int64, 0 to 4.32k) | library_name (string, 189 classes) | tags (list, 1-237 items) | pipeline_tag (string, 53 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2026-05-05 05:54:22) | card (string, 500-661k chars) | entities (list, 0-12 items) |
|---|---|---|---|---|---|---|---|---|---|---|
Mardiyyah/cellate2.0-tapt_base-LR_5e-05 | Mardiyyah | 2026-02-24T14:22:37Z | 182 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"base_model:finetune:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"license:mit",
"endpoints_compatible",
"region:us"
] | fill-mask | 2026-02-24T14:19:32Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cellate2.0-tapt_base-LR_5e-05
This model is a fine-tuned version of [microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltex... | [] |
nayhav/qwen2-7b-instruct-trl-sft-ChartQA | nayhav | 2025-10-03T18:16:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-10-03T07:39:22Z | # Model Card for qwen2-7b-instruct-trl-sft-ChartQA
This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you h... | [] |
alesiaivanova/Qwen-3b-GRPO-dag-better-4-sub-v11 | alesiaivanova | 2025-09-25T12:03:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | 2025-09-25T12:02:41Z | # Model Card for Qwen-3b-GRPO-dag-better-4-sub-v11
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to th... | [
{
"start": 1181,
"end": 1185,
"text": "GRPO",
"label": "training method",
"score": 0.7348515391349792
}
] |
AfriScience-MT/gemma_3_4b_it-lora-r16-zul-eng | AfriScience-MT | 2026-02-10T15:18:32Z | 2 | 0 | peft | [
"peft",
"safetensors",
"translation",
"african-languages",
"scientific-translation",
"afriscience-mt",
"lora",
"gemma",
"zu",
"en",
"base_model:google/gemma-3-4b-it",
"base_model:adapter:google/gemma-3-4b-it",
"license:apache-2.0",
"model-index",
"region:us"
] | translation | 2026-02-10T15:18:23Z | # gemma_3_4b_it-lora-r16-zul-eng
[](https://huggingface.co/AfriScience-MT/gemma_3_4b_it-lora-r16-zul-eng)
This is a **LoRA adapter** for the AfriScience-MT project, enabling efficient scientific machine translation for Afric... | [
{
"start": 214,
"end": 218,
"text": "LoRA",
"label": "training method",
"score": 0.7417870163917542
},
{
"start": 544,
"end": 548,
"text": "LoRA",
"label": "training method",
"score": 0.711875319480896
},
{
"start": 571,
"end": 575,
"text": "LoRA",
"la... |
Guilherme34/Firefly-V2.5-Q6_K-GGUF | Guilherme34 | 2026-02-15T21:32:49Z | 1 | 2 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"Guilherme34/Firefly",
"SicariusSicariiStuff/Impish_LLAMA_3B",
"llama-cpp",
"gguf-my-repo",
"base_model:Guilherme34/Firefly-V2.5",
"base_model:quantized:Guilherme34/Firefly-V2.5",
"endpoints_compatible",
"region:us",
"conversatio... | null | 2026-02-15T21:32:34Z | # Guilherme34/Firefly-V2.5-Q6_K-GGUF
This model was converted to GGUF format from [`Guilherme34/Firefly-V2.5`](https://huggingface.co/Guilherme34/Firefly-V2.5) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingf... | [] |
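GGUF conversions like this one are meant to be run with llama.cpp; below is a minimal loading sketch using the `llama-cpp-python` bindings. The filename glob is an assumption inferred from the repo name, so verify it against the repo's file list.
```python
# Minimal sketch with llama-cpp-python; the GGUF filename glob is an
# assumption inferred from the repo name (verify against the file list).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Guilherme34/Firefly-V2.5-Q6_K-GGUF",
    filename="*q6_k.gguf",
    n_ctx=4096,
)
out = llm("Q: What does Q6_K quantization trade off?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```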
sinjab/ms-marco-TinyBERT-L6-F16-GGUF | sinjab | 2025-10-11T18:07:11Z | 3 | 0 | gguf | [
"gguf",
"reranker",
"llama.cpp",
"en",
"base_model:cross-encoder/ms-marco-TinyBERT-L6",
"base_model:quantized:cross-encoder/ms-marco-TinyBERT-L6",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-10-11T17:34:25Z | # ms-marco-TinyBERT-L6-F16-GGUF
This model was converted to GGUF format from [cross-encoder/ms-marco-TinyBERT-L-6](https://huggingface.co/cross-encoder/ms-marco-TinyBERT-L-6) using llama.cpp via the ggml.ai's GGUF-my-repo space.
Refer to the [original model card](https://huggingface.co/cross-encoder/ms-marco-TinyBERT... | [] |
ibm-granite/granite-guardian-4.1-8b | ibm-granite | 2026-04-29T14:48:23Z | 0 | 13 | transformers | [
"transformers",
"safetensors",
"granite",
"text-generation",
"guardian",
"safety",
"hallucination",
"conversational",
"en",
"arxiv:2412.07724",
"base_model:ibm-granite/granite-4.1-8b",
"base_model:finetune:ibm-granite/granite-4.1-8b",
"license:apache-2.0",
"endpoints_compatible",
"region... | text-generation | 2026-04-16T13:58:50Z | # Granite Guardian 4.1 8B
## What's New
**Granite Guardian 4.1 8B** introduces improved **Bring Your Own Criteria (BYOC)** support, enabling users to define arbitrary judging criteria beyond the pre-baked safety and hallucination detectors. The model can now faithfully evaluate complex, multi-part requirements such a... | [] |
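A sketch of how custom criteria might be passed with plain chat templating; the message layout and criterion wording here are assumptions, not the model's documented guardian protocol:
```python
# Hypothetical BYOC-style call via standard chat templating; the system
# prompt format is an assumption, consult the model card for the real one.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-guardian-4.1-8b"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "Judge the user message against this criterion: no financial advice."},
    {"role": "user", "content": "Should I move all my savings into a single stock?"},
]
input_ids = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=32)
print(tok.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```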
AbstractPhil/sd15-geoflow-object-association | AbstractPhil | 2026-02-07T19:24:54Z | 0 | 1 | sd15-flow-trainer | [
"sd15-flow-trainer",
"geometric-deep-learning",
"stable-diffusion",
"ksimplex",
"pentachoron",
"flow-matching",
"cross-attention-prior",
"text-to-image",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:finetune:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:mit",
... | text-to-image | 2026-02-07T00:06:22Z | # Before


# After one epoch
](https://opensource.org/licenses/MIT)
[](https://www.python.... | [] |
manancode/opus-mt-eu-de-ctranslate2-android | manancode | 2025-08-17T16:56:19Z | 0 | 0 | null | [
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] | translation | 2025-08-17T16:56:06Z | # opus-mt-eu-de-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-eu-de` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-eu-de
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by*... | [] |
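A minimal inference sketch for a CTranslate2 conversion like this; the SentencePiece file name is an assumption about the converted repo's layout.
```python
# Minimal CTranslate2 inference sketch; "source.spm" is an assumed
# tokenizer file name, check the converted repository's contents.
import ctranslate2
import sentencepiece as spm

translator = ctranslate2.Translator("opus-mt-eu-de-ctranslate2-android", device="cpu")
sp = spm.SentencePieceProcessor(model_file="opus-mt-eu-de-ctranslate2-android/source.spm")

tokens = sp.encode("Kaixo, mundua!", out_type=str)  # Basque: "Hello, world!"
result = translator.translate_batch([tokens])
print(sp.decode(result[0].hypotheses[0]))
```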
kureha295/deepseek-ai-DeepSeek-R1-Distill-Llama-8B-ortho-cot-layer-17 | kureha295 | 2025-12-31T00:37:59Z | 0 | 0 | null | [
"safetensors",
"llama",
"orthogonalized",
"cot",
"layer-17",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"license:apache-2.0",
"region:us"
] | null | 2025-12-31T00:36:32Z | # Orthogonalized Cot Model (Layer 17)
This model is an orthogonalized version of [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B).
## Model Details
- **Base Model:** deepseek-ai/DeepSeek-R1-Distill-Llama-8B
- **Model Type:** Cot
- **Orthogonalization Layer:*... | [] |
fengchen31/wardrobe-os-stylist-default | fengchen31 | 2026-04-13T21:56:45Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2026-04-13T21:56:42Z | # Wardrobe OS Stylist Adapter (default)
Per-user stylist LoRA adapter trained on top of `google/gemma-4-E4B-it`.
- **Adapter r**: 16
- **Training samples**: 200
- **Last trained**: 2026-04-13T21:56:42.036213+00:00
- **Base model**: gemma-4-e4b@1.0
- **Source**: self-distilled bootstrap (Sprint 16 MVP)
## Usage
```p... | [] |
dongqinggeng/rsna | dongqinggeng | 2026-01-24T19:47:12Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"resnet",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-classification | 2026-01-17T20:47:22Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rsna
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the None dataset.... | [] |
EganAI/Qwen3-4B-Thinking-2507-20250813-033307-1-Q8_0-GGUF | EganAI | 2026-03-04T15:25:39Z | 24 | 3 | null | [
"gguf",
"genetic-merge",
"experimental",
"llama-cpp",
"gguf-my-repo",
"base_model:EganAI/Qwen3-4B-Thinking-2507-20250813-033307-1",
"base_model:quantized:EganAI/Qwen3-4B-Thinking-2507-20250813-033307-1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-13T14:50:54Z | # EganAI/Qwen3-4B-Thinking-2507-20250813-033307-1-Q8_0-GGUF
This model was converted to GGUF format from [`EganAI/Qwen3-4B-Thinking-2507-20250813-033307-1`](https://huggingface.co/EganAI/Qwen3-4B-Thinking-2507-20250813-033307-1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/ggu... | [] |
unsloth/Apertus-8B-Instruct-2509 | unsloth | 2025-10-04T06:24:03Z | 98 | 0 | transformers | [
"transformers",
"safetensors",
"apertus",
"text-generation",
"multilingual",
"compliant",
"swiss-ai",
"conversational",
"arxiv:2509.14233",
"base_model:swiss-ai/Apertus-8B-2509",
"base_model:finetune:swiss-ai/Apertus-8B-2509",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-10-02T07:11:07Z | # Apertus

## Table of Contents
1. [Model Summary](#model-summary)
2. [How to use](#how-to-use)
3. [Evaluation](#evaluation)
4. [Training](#training)
5. [Limitations](#limitations)
6. [Legal Aspec... | [] |
Muapi/trenchcoat-merchant-concept-flux-il | Muapi | 2025-09-05T11:08:07Z | 0 | 0 | null | [
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-09-05T11:07:56Z | # Trenchcoat Merchant | Concept - FLUX/IL

**Base model**: Flux.1 D
**Trained words**: Trenchcoat Merchant, character that offers products stored in their Trenchcoat character in this picture opens the right/left side/both sides of the trenchcoat to show products, In case of this picture th... | [] |
passagereptile455/qwen3-humaneval-sft | passagereptile455 | 2026-01-04T12:17:57Z | 5 | 0 | null | [
"safetensors",
"qwen3",
"fine-tuned",
"humaneval",
"codeforces",
"lora",
"sft",
"dataset:open-r1/codeforces-cots",
"base_model:Qwen/Qwen3-0.6B",
"base_model:adapter:Qwen/Qwen3-0.6B",
"region:us"
] | null | 2026-01-03T22:15:58Z | # Qwen3-0.6B Fine-tuned on Codeforces-CoTS (Python)
Reproduction of [Ben Burtenshaw's HuggingFace fine-tuning challenge](https://x.com/ben_burtenshaw/status/1999502002251006449) (Claude Code vs Codex). Fine-tuned using SFT on the **solutions_py** subset of `open-r1/codeforces-cots`.
## Results on HumanEval
| Model |... | [] |
Sleem247/f_x_attorney-Q8_0-GGUF | Sleem247 | 2025-11-19T17:47:46Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:MoGP/f_x_attorney",
"base_model:quantized:MoGP/f_x_attorney",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-11-19T17:47:43Z | # Sleem247/f_x_attorney-Q8_0-GGUF
This model was converted to GGUF format from [`MoGP/f_x_attorney`](https://huggingface.co/MoGP/f_x_attorney) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MoGP/f_x_a... | [] |
xiwenc1/dpo_qwen2.5-3b_beta0.1 | xiwenc1 | 2026-02-06T04:41:08Z | 2 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-02-06T04:14:14Z | # Model Card for dpo_qwen2.5-3b
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the futur... | [
{
"start": 135,
"end": 138,
"text": "TRL",
"label": "training method",
"score": 0.7124494314193726
},
{
"start": 646,
"end": 649,
"text": "DPO",
"label": "training method",
"score": 0.8056053519248962
},
{
"start": 936,
"end": 939,
"text": "DPO",
"labe... |
learnrr/BiLSTM-CRF_ontonotes | learnrr | 2026-04-13T06:43:24Z | 0 | 0 | null | [
"BiLSTM-CRF",
"ner",
"token-classification",
"bilstm-crf",
"ontonotes-v5",
"safetensors",
"en",
"license:mit",
"region:us"
] | token-classification | 2026-04-13T06:38:19Z | # BiLSTM-CRF for NER (OntoNotes 5.0)
This repository contains a high-performance Named Entity Recognition (NER) model based on the **BiLSTM-CRF** architecture. It is trained on the **OntoNotes 5.0** English dataset.
## Model Description
- **Architecture:** 2-layer Bidirectional LSTM + CRF head
- **Task:** Token Class... | [] |
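The card specifies a 2-layer BiLSTM with a CRF head; a compact architectural sketch with the `pytorch-crf` package follows. Embedding and hidden sizes are assumptions, not the repository's actual hyperparameters.
```python
# Sketch of a 2-layer BiLSTM + CRF tagger (sizes are illustrative).
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf

class BiLSTMCRF(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden_dim=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden_dim // 2, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, tokens, tags=None, mask=None):
        emissions = self.fc(self.lstm(self.emb(tokens))[0])
        if tags is not None:
            return -self.crf(emissions, tags, mask=mask)  # training loss (NLL)
        return self.crf.decode(emissions, mask=mask)      # Viterbi paths

model = BiLSTMCRF(vocab_size=30000, num_tags=37)  # 18 OntoNotes types in BIO, plus O
```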
aoiandroid/sam-audio-large-tv | aoiandroid | 2026-03-18T08:28:46Z | 10 | 0 | null | [
"en",
"license:other",
"region:us"
] | null | 2026-03-18T08:28:46Z | # SAM-Audio: Segment Anything Model for Audio
SAM-Audio is a model for isolating any sound in audio using text, visual, or temporal prompts. It can separate specific sounds from complex audio mixtures based on natural language descriptions, visual cues from video, or time spans.
## Authentication
Before using SAM-Au... | [] |
imhmdf/LydiaTM-qwen3-14b-tw-reasoning-merged | imhmdf | 2025-12-12T04:08:05Z | 4 | 0 | null | [
"gguf",
"qwen3",
"llama.cpp",
"unsloth",
"reasoning",
"chinese",
"chain-of-thought",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-12-12T00:56:25Z | # 🧠 LydiaTM-Qwen3-14B — TW Reasoning (Merged GGUF)
This repository contains the GGUF export of a fine-tuned Qwen3-14B model, trained using Unsloth and TRL’s SFTTrainer on the high-quality `twinkle-ai/tw-reasoning-instruct-50k` dataset.
The model specializes in reasoning, chain-of-thought, and instruction following i... | [] |
GMorgulis/Qwen2.5-7B-Instruct-eagle-NORMAL-ft4.42 | GMorgulis | 2026-03-16T02:42:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2026-03-15T18:27:10Z | # Model Card for Qwen2.5-7B-Instruct-eagle-NORMAL-ft4.42
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If y... | [] |
Thireus/GLM-4.6-THIREUS-IQ2_K_R4-SPECIAL_SPLIT | Thireus | 2026-02-12T07:14:17Z | 1 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-10-02T20:49:26Z | # GLM-4.6
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/GLM-4.6-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the GLM-4.6 model (official repo: https://huggingface.co/zai-org/GLM-4.6). These GGUF shards are designed to be used with **Thireus’ ... | [] |
rbelanec/train_siqa_456_1760637828 | rbelanec | 2025-10-18T20:10:57Z | 0 | 0 | peft | [
"peft",
"safetensors",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"llama-factory",
"transformers",
"text-generation",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | text-generation | 2025-10-18T16:08:56Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_siqa_456_1760637828
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta... | [] |
amd/Kimi-K2-Instruct-0905-MXFP4 | amd | 2026-04-20T07:15:18Z | 151 | 1 | null | [
"safetensors",
"deepseek_v3",
"custom_code",
"base_model:moonshotai/Kimi-K2-Instruct-0905",
"base_model:quantized:moonshotai/Kimi-K2-Instruct-0905",
"license:other",
"8-bit",
"quark",
"region:us"
] | null | 2026-01-23T06:46:30Z | # Model Overview
- **Model Architecture:** Kimi-K2-Instruct
- **Input:** Text
- **Output:** Text
- **Supported Hardware Microarchitecture:** AMD MI350/MI355
- **ROCm:** 7.0
- **Operating System(s):** Linux
- **Inference Engine:** [vLLM](https://docs.vllm.ai/en/latest/)
- **Model Optimizer:** [AMD-Quark](https://qu... | [] |
hector-gr/RLCR-v4-ks-uniqueness-cov0-entropy100-ece10-cold-math | hector-gr | 2026-03-26T02:06:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-25T13:08:58Z | # Model Card for RLCR-v4-ks-uniqueness-cov0-entropy100-ece10-cold-math
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you h... | [] |
shibing624/macbert4csc-base-chinese | shibing624 | 2025-09-15T09:55:38Z | 6,651 | 116 | transformers | [
"transformers",
"pytorch",
"onnx",
"safetensors",
"bert",
"fill-mask",
"zh",
"pycorrector",
"text-generation",
"dataset:shibing624/CSC",
"arxiv:2004.13922",
"license:apache-2.0",
"endpoints_compatible",
"deploy:azure",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | # MacBERT for Chinese Spelling Correction(macbert4csc) Model
Chinese spelling correction model.
Evaluation of `macbert4csc-base-chinese` on the SIGHAN2015 test data:
| Level | Correct-Precision | Correct-Recall | Correct-F1 |
|--|--|--|--|
| Character-level | 93.72 | 86.40 | 89.91 |
| Sentence-level | 82.64 | 73.66 | 77.89 |
Because the training data included the SIGHAN2015 training set (reproducing p... | [] |
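Since macbert4csc is exposed as a BERT fill-mask model, a common correction pattern is to take the argmax token at every position of the masked-LM output; a minimal sketch (the example sentence is illustrative):
```python
# Minimal correction sketch: run the sentence through the masked-LM
# head and take the argmax token at each position.
import torch
from transformers import AutoTokenizer, BertForMaskedLM

repo = "shibing624/macbert4csc-base-chinese"
tok = AutoTokenizer.from_pretrained(repo)
model = BertForMaskedLM.from_pretrained(repo)

text = "今天新情很好"  # "心情" (mood) misspelled as "新情"
inputs = tok(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
ids = logits.argmax(dim=-1)[0][1:-1]  # drop [CLS] and [SEP]
print(tok.decode(ids).replace(" ", ""))  # expected: 今天心情很好
```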
nekoooooneko/mymodel4 | nekoooooneko | 2026-02-28T15:05:50Z | 15 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v5",
"dataset:daichira/structured-hard-sft-4k",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apach... | text-generation | 2026-02-28T10:54:07Z | qwen3-4b-structured-output-lora
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
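A minimal loading sketch along those lines, attaching this adapter to the base repo via PEFT:
```python
# Minimal sketch: load the base model, then attach the LoRA adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-4B-Instruct-2507", device_map="auto")
model = PeftModel.from_pretrained(base, "nekoooooneko/mymodel4")
tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")
```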
## Training Objective
This adapter is trained to improve **s... | [
{
"start": 133,
"end": 138,
"text": "QLoRA",
"label": "training method",
"score": 0.8117258548736572
},
{
"start": 574,
"end": 579,
"text": "QLoRA",
"label": "training method",
"score": 0.7071635723114014
}
] |
ljnfwea/Aegis_Debris-Removal | ljnfwea | 2026-03-27T15:08:23Z | 0 | 0 | null | [
"en",
"dataset:nateraw/pascal-voc-2012",
"base_model:google/efficientnet-b0",
"base_model:finetune:google/efficientnet-b0",
"license:mit",
"region:us"
] | null | 2026-03-27T14:56:09Z | # Object Detection System
A comprehensive object detection system built with TensorFlow/Keras supporting custom model training and inference on images, videos, and real-time webcam feeds.
## Features
- **Multiple Backbones**: MobileNetV2, ResNet50, EfficientNet
- **Custom Training**: Train on your own dataset
- **Fl... | [] |
shurpy/Ru-Eng-adfilter | shurpy | 2025-10-09T03:28:51Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"cross-encoder",
"reranker",
"generated_from_trainer",
"dataset_size:2400",
"loss:BinaryCrossEntropyLoss",
"text-ranking",
"arxiv:1908.10084",
"base_model:cross-encoder/mmarco-mMiniLMv2-L12-H384-v1",
"base_model:finetune:cross-encoder/mmar... | text-ranking | 2025-10-09T03:23:15Z | # CrossEncoder based on cross-encoder/mmarco-mMiniLMv2-L12-H384-v1
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [cross-encoder/mmarco-mMiniLMv2-L12-H384-v1](https://huggingface.co/cross-encoder/mmarco-mMiniLMv2-L12-H384-v1) using the [sentence-transformers](... | [] |
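A minimal scoring sketch with sentence-transformers' `CrossEncoder`; the (query, passage) pair semantics are an assumption about how this reranker is meant to be called.
```python
# Minimal reranking sketch; pair semantics are assumed (query, passage).
from sentence_transformers import CrossEncoder

model = CrossEncoder("shurpy/Ru-Eng-adfilter")
scores = model.predict([
    ("best noise-cancelling headphones", "A comparison of noise-cancelling headphones"),
    ("best noise-cancelling headphones", "Buy cheap watches now!!!"),
])
print(scores)
```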
GadflyII/Qwen3-Coder-Next-NVFP4 | GadflyII | 2026-02-04T01:26:48Z | 718,965 | 37 | transformers | [
"transformers",
"safetensors",
"qwen3_next",
"text-generation",
"qwen3",
"moe",
"nvfp4",
"quantized",
"llmcompressor",
"vllm",
"conversational",
"base_model:Qwen/Qwen3-Coder-Next",
"base_model:quantized:Qwen/Qwen3-Coder-Next",
"license:apache-2.0",
"endpoints_compatible",
"compressed-t... | text-generation | 2026-02-04T01:12:49Z | # Note: If you have a multi-GPU SM120 Blackwell system (RTX 50/Pro), try my vLLM fork to resolve P2P / TP=2 issues (Pending PR into upstream).
https://github.com/Gadflyii/vllm/tree/main
# Qwen3-Coder-Next-NVFP4
NVFP4 quantized version of [Qwen/Qwen3-Coder-Next](https://huggingface.co/Qwen/Qwen3-Coder-Next) (80B-A3B)... | [] |
Karmul/gemma-4-31B-it | Karmul | 2026-05-01T09:51:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma4",
"image-text-to-text",
"conversational",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2026-05-01T09:51:03Z | <div align="center">
<img src=https://ai.google.dev/gemma/images/gemma4_banner.png>
</div>
<p align="center">
<a href="https://huggingface.co/collections/google/gemma-4" target="_blank">Hugging Face</a> |
<a href="https://github.com/google-gemma" target="_blank">GitHub</a> |
<a href="https://blog.google... | [] |
giacomoran/so101_data_collection_cube_hand_guided_act_0 | giacomoran | 2025-12-30T11:42:25Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:giacomoran/so101_data_collection_cube_hand_guided",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-12-30T11:42:19Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
Diamegs/PIT-4B-201712 | Diamegs | 2026-04-24T01:13:23Z | 0 | 0 | null | [
"safetensors",
"pit",
"causal-lm",
"point-in-time",
"temporal-llm",
"pretrained",
"custom_code",
"en",
"license:apache-2.0",
"region:us"
] | null | 2026-04-23T15:30:24Z | # PIT-4B — Point-In-Time GPT (Pre-trained, 2017-12)
**Point-In-Time (PIT)** is a family of GPT-style language models trained on
chronologically-ordered monthly snapshots of [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb).
Each checkpoint captures the state of knowledge available up to a specific month... | [] |
gafiatulin/tada-3b-ml-mlx | gafiatulin | 2026-03-20T19:08:32Z | 61 | 0 | mlx | [
"mlx",
"safetensors",
"tada",
"tts",
"voice-cloning",
"base_model:HumeAI/tada-3b-ml",
"base_model:finetune:HumeAI/tada-3b-ml",
"license:llama3.2",
"region:us"
] | null | 2026-03-19T12:51:30Z | # TADA 3b-ml — MLX
MLX-converted weights for [HumeAI/tada-3b-ml](https://huggingface.co/HumeAI/tada-3b-ml). Part of [tada-mlx](https://github.com/gafiatulin/tada-mlx).
Built with Llama. See [LICENSE](LICENSE) for the Llama 3.2 Community License Agreement.
## Usage
```bash
git clone https://github.com/gafiatulin/tad... | [] |
BlueTriangles/SDXL_Lynn_Lambretta | BlueTriangles | 2026-04-26T04:31:09Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:John6666/graycolor-custommodel-v22-sdxl",
"base_model:adapter:John6666/graycolor-custommodel-v22-sdxl",
"license:openmdw-1.0",
"region:us"
] | text-to-image | 2026-04-26T04:31:05Z | # Lynn Lambretta / Bodacious Space Pirates(リン・ランブレッタ in モーレツ宇宙海賊)
<Gallery />
## Model description
Trained on 30 images in total (girl, school uniform, yacht club), 10 epochs, 2-10 iterations for 350 steps
Appearance: sdxl-lynn-lambretta, brown hair, short hair, blue eyes, messy hair, small breasts, eyelashes
... | [] |
tomhr/dpo-qwen-cot-merged | tomhr | 2026-02-08T07:22:34Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"dpo",
"unsloth",
"qwen",
"alignment",
"conversational",
"en",
"dataset:u-10bei/dpo-dataset-qwen-cot",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"text-gener... | text-generation | 2026-02-08T07:19:08Z | # qwen3-4b-dpo-qwen-cot-merged
This model is a fine-tuned version of **Qwen/Qwen3-4B-Instruct-2507** using **Direct Preference Optimization (DPO)** via the **Unsloth** library.
This repository contains the **full-merged 16-bit weights**. No adapter loading is required.
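Since the weights are fully merged, loading needs no PEFT step; a minimal sketch:
```python
# Minimal sketch: merged 16-bit weights load like any causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "tomhr/dpo-qwen-cot-merged", torch_dtype=torch.bfloat16, device_map="auto"
)
tok = AutoTokenizer.from_pretrained("tomhr/dpo-qwen-cot-merged")
```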
## Training Objective
This model has been optim... | [
{
"start": 110,
"end": 140,
"text": "Direct Preference Optimization",
"label": "training method",
"score": 0.8629735708236694
},
{
"start": 142,
"end": 145,
"text": "DPO",
"label": "training method",
"score": 0.8603426218032837
},
{
"start": 331,
"end": 334,
... |
tachiwin/Tachiwin-OCR-1.5 | tachiwin | 2026-03-01T03:43:43Z | 0 | 4 | adapter-transformers | [
"adapter-transformers",
"safetensors",
"paddleocr_vl",
"image-text-to-text",
"text-generation-inference",
"transformers",
"unsloth",
"trl",
"sft",
"conversational",
"custom_code",
"en",
"dataset:tachiwin/multilingual_ocr_llm_2",
"base_model:PaddlePaddle/PaddleOCR-VL-1.5",
"base_model:ada... | image-text-to-text | 2026-02-24T17:34:47Z | # TachiwinOCR 1.5 🦡
**for the Indigenous Languages of Mexico**
This is a PaddleOCR-VL fine-tune specialized in the 68 indigenous languages of Mexico and their diverse character and glyph repertoire, a world first in tech access and linguistic rights
## Inference
You can perform inference using the `PaddleOCR` p... | [] |
mradermacher/kanana-1.5-2.1b-base-GGUF | mradermacher | 2025-09-08T14:31:36Z | 35 | 0 | transformers | [
"transformers",
"gguf",
"en",
"ko",
"base_model:OpenLLM-Korea/kanana-1.5-2.1b-base",
"base_model:quantized:OpenLLM-Korea/kanana-1.5-2.1b-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-08T14:04:06Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static qu... | [] |
Premchan369/quantum-ai-smart-grid-india | Premchan369 | 2026-03-13T15:53:39Z | 0 | 0 | null | [
"quantum-computing",
"smart-grid",
"energy-forecasting",
"time-series",
"ev-charging",
"digital-twin",
"india",
"lstm",
"transformer",
"qaoa",
"en",
"license:mit",
"region:us"
] | null | 2026-03-13T15:31:17Z | # ⚡ Quantum-AI Digital Twin — Indian Smart Grid Optimization
[](https://huggingface.co/Premchan369/quantum-ai-smart-grid-india)
[](https://huggingface.co/spaces/Premchan3... | [] |
FiveC/BartBanaFinal-Combine | FiveC | 2026-02-17T12:48:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:IAmSkyDra/BARTBana_v5",
"base_model:finetune:IAmSkyDra/BARTBana_v5",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2026-02-17T12:16:50Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BartBanaFinal-Combine
This model is a fine-tuned version of [IAmSkyDra/BARTBana_v5](https://huggingface.co/IAmSkyDra/BARTBana_v5)... | [] |
kmd2525/qwen3-4b-structured-output-lora-v4.1 | kmd2525 | 2026-02-09T08:21:28Z | 5 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"v4.1",
"text-generation",
"en",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v2",
"dataset:u-10bei/structured_data_with_cot_dataset_512_v4",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instru... | text-generation | 2026-02-09T08:19:15Z | # qwen3-4b-structured-output-lora-v4.1
This repository provides a **LoRA adapter (v4.1)** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Version: v4.1 — Data Curation (k=50)
This i... | [] |
anjajar/baby_goldfish_rus | anjajar | 2026-03-27T10:52:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-03-27T10:38:02Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# baby_goldfish_rus
This model is a fine-tuned version of [gpt_small_config.json](https://huggingface.co/gpt_small_config.json) on ... | [] |
spensly/Llama-3.2-3B-Instruct-spensly-ai-ORPO-Q4_K_M-GGUF | spensly | 2026-02-12T08:34:05Z | 33 | 0 | null | [
"gguf",
"llama",
"llama.cpp",
"unsloth",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-02-11T17:45:46Z | # Llama-3.2-3B Spensly AI: Smart Financial Guidance 🌟
“Think before you spend!”
This model has been **finetuned** specifically for [Spensly](https://www.spensly.com/) and converted to **GGUF format** for high compatibility and performance.
Spensly is an AI-powered financial coaching platform designed to help users... | [
{
"start": 188,
"end": 199,
"text": "GGUF format",
"label": "training method",
"score": 0.8175577521324158
},
{
"start": 1791,
"end": 1802,
"text": "GGUF format",
"label": "training method",
"score": 0.7316351532936096
}
] |
BAAI/Emu3-Gen-hf | BAAI | 2025-05-23T07:47:13Z | 997 | 2 | null | [
"safetensors",
"emu3",
"vision",
"image-text-to-text",
"conversational",
"en",
"arxiv:2409.18869",
"license:apache-2.0",
"region:us"
] | image-text-to-text | 2024-10-24T09:09:07Z | <div align='center'>
<h1>Emu3: Next-Token Prediction is All You Need</h1>
<h3></h3>
[Emu3 Team, BAAI](https://www.baai.ac.cn/english.html)
</div>
<div align='left'>
<img src="https://github.com/baaivision/Emu3/blob/main/assets/arch.png?raw=True" class="interpolation-image" alt="arch." height="80%" width="70%" />
... | [] |
AITRADER/ltx2-distilled-8bit-mlx | AITRADER | 2026-02-11T19:33:07Z | 73 | 0 | mlx | [
"mlx",
"diffusers",
"safetensors",
"video-generation",
"apple-silicon",
"ltx-2",
"distilled",
"quantized",
"license:other",
"region:us"
] | null | 2026-01-31T14:34:59Z | # LTX-2 19B Distilled (8-bit) - MLX
This is an 8-bit quantized version of the [LTX-2 19B Distilled](https://huggingface.co/Lightricks/LTX-2) model, optimized for Apple Silicon using MLX.
## Model Description
LTX-2 is a state-of-the-art video generation model from Lightricks. This version has been quantized to 8-bit p... | [] |
w341e/Qwen3.5-397B-A17B-REAP-28-NVFP4 | w341e | 2026-03-29T07:53:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_5_moe",
"image-text-to-text",
"qwen3.5",
"moe",
"pruned",
"reap",
"nvfp4",
"fp4",
"text-generation",
"conversational",
"arxiv:2510.13999",
"base_model:Qwen/Qwen3.5-397B-A17B",
"base_model:quantized:Qwen/Qwen3.5-397B-A17B",
"license:apache-2.0",
"... | text-generation | 2026-03-29T07:53:37Z | edit: with vllm, use --language-model-only , have not figured this one out yet.
# Qwen3.5-397B-A17B — REAP 28% Pruned, NVFP4
A personal experiment in aggressive MoE pruning. The goal: fit Qwen3.5-397B on **2× 96GB Blackwell GPUs** with usable KV cache (~90K tokens), without losing quality.
## What this is
28% of ex... | [] |
Robinbisht20/Kimi-K2-Instruct | Robinbisht20 | 2026-02-16T14:24:58Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"kimi_k2",
"text-generation",
"conversational",
"custom_code",
"license:other",
"endpoints_compatible",
"fp8",
"region:us"
] | text-generation | 2026-02-16T14:24:58Z | <div align="center">
<picture>
<img src="figures/kimi-logo.png" width="30%" alt="Kimi K2: Open Agentic Intellignece">
</picture>
</div>
<hr>
<div align="center" style="line-height:1">
<a href="https://www.kimi.com" target="_blank"><img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-Kimi%20K2-ff6b6... | [] |
inferencerlabs/DeepSeek-V4-Pro-MLX-2.8bit-EXP | inferencerlabs | 2026-04-29T23:42:07Z | 798 | 1 | mlx | [
"mlx",
"deepseek_v4",
"quantized",
"text-generation",
"conversational",
"en",
"base_model:deepseek-ai/DeepSeek-V4-Pro",
"base_model:quantized:deepseek-ai/DeepSeek-V4-Pro",
"region:us"
] | text-generation | 2026-04-27T14:17:35Z | # NOTICE
<marquee direction="left" width="400">
<h1 style="font-size: 60px; color: red;">CURRENTLY UPLOADING... </h1>
</marquee>
**See DeepSeek-V4-Pro MLX in action - [demonstration videos](https://youtube.com/xcreate)**
#### Tested with an M3 Ultra 512 GiB and M4 Max 128 GiB RAM u... | [] |
mradermacher/SpatialThinker-3B-i1-GGUF | mradermacher | 2026-01-31T09:10:04Z | 92 | 2 | transformers | [
"transformers",
"gguf",
"spatial-reasoning",
"multimodal",
"vision-language",
"scene-graph",
"reinforcement-learning",
"en",
"dataset:OX-PIXL/STVQA-7K",
"base_model:OX-PIXL/SpatialThinker-3B",
"base_model:quantized:OX-PIXL/SpatialThinker-3B",
"license:apache-2.0",
"endpoints_compatible",
"... | reinforcement-learning | 2025-11-08T21:55:02Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
Glazkov/qwen2.5-vl-table-extraction-ru-v0.1 | Glazkov | 2025-09-27T11:54:49Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"vision-language-model",
"table-extraction",
"financial-documents",
"qwen2.5-vl",
"fine-tuned",
"image-to-text",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-to-text | 2025-09-27T11:53:07Z | # Glazkov/qwen2.5-vl-table-extraction-ru-v0.1
Fine-tuned Qwen2.5-VL-3B model for extracting structured data from financial and economic table images. This model has been trained on synthetic table data to convert table images into JSON format with parameter, date, value, and measurement fields.
## Model Details
- **... | [] |
Fernandess/Qwen3-4B-SFT | Fernandess | 2025-08-10T13:16:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"unsloth",
"trl",
"base_model:unsloth/Qwen3-4B-Base",
"base_model:finetune:unsloth/Qwen3-4B-Base",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T13:12:02Z | # Model Card for Qwen3-4B-SFT
This model is a fine-tuned version of [unsloth/Qwen3-4B-Base](https://huggingface.co/unsloth/Qwen3-4B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could ... | [] |
epfl-ml-ytf/apertus-8b-pruned-english-ds-63159 | epfl-ml-ytf | 2025-12-18T14:16:46Z | 0 | 0 | null | [
"safetensors",
"apertus",
"academic-project",
"pruning",
"vocabulary-pruning",
"nlp",
"llm",
"ml optimization",
"en",
"base_model:swiss-ai/Apertus-8B-Instruct-2509",
"base_model:finetune:swiss-ai/Apertus-8B-Instruct-2509",
"license:apache-2.0",
"region:us"
] | null | 2025-12-17T21:15:48Z | # Model Card for Apertus-8B_pruned-english-ds
## Model Summary
This model is a vocabulary-pruned English-only version of [swiss-ai/Apertus-8B-Instruct-2509](https://huggingface.co/swiss-ai/Apertus-8B-Instruct-2509).
It was created as part of an academic project in Machine Learning to
investigate the effects of voca... | [] |
huihui-ai/Qwen2.5-VL-32B-Instruct-abliterated | huihui-ai | 2025-11-07T23:37:16Z | 637 | 18 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"multimodal",
"abliterated",
"uncensored",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-VL-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-32B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints... | image-text-to-text | 2025-04-01T18:17:25Z | # huihui-ai/Qwen2.5-VL-32B-Instruct-abliterated
This is an uncensored version of [Qwen/Qwen2.5-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to know more about it)... | [] |
fuchi0000/qwen3-4b-structured-output-lora-ver2 | fuchi0000 | 2026-02-23T02:42:51Z | 6 | 0 | peft | [
"peft",
"safetensors",
"qlora",
"lora",
"structured-output",
"text-generation",
"en",
"dataset:daichira/structured-5k-mix-sft",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | text-generation | 2026-02-23T02:42:24Z | qwen3-4b-structured-output-lora-ver2
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.
## Training Objective
This adapter is trained to improv... | [
{
"start": 138,
"end": 143,
"text": "QLoRA",
"label": "training method",
"score": 0.8026993274688721
},
{
"start": 579,
"end": 584,
"text": "QLoRA",
"label": "training method",
"score": 0.7214235663414001
}
] |
vollmannv/35f76dd0-983f-418a-997c-9036535c747d | vollmannv | 2026-03-12T13:08:44Z | 1,660 | 0 | mlx | [
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:mlx-community/Qwen3-4B-4bit",
"base_model:quantized:mlx-community/Qwen3-4B-4bit",
"license:apache-2.0",
"4-bit",
"region:us"
] | text-generation | 2026-03-12T00:10:56Z | # vollmannv/Qwen3-4B-4bit-sprich-german
This model [vollmannv/Qwen3-4B-4bit-sprich-german](https://huggingface.co/vollmannv/Qwen3-4B-4bit-sprich-german) was
converted to MLX format from [mlx-community/Qwen3-4B-4bit](https://huggingface.co/mlx-community/Qwen3-4B-4bit)
using mlx-lm version **0.30.7**.
## Use with mlx
... | [] |
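The usage snippet above is truncated; a minimal sketch with mlx-lm's `load`/`generate` helpers, using the repo name the card states:
```python
# Minimal sketch with mlx-lm (Apple Silicon); repo id taken from the card.
from mlx_lm import load, generate

model, tokenizer = load("vollmannv/Qwen3-4B-4bit-sprich-german")
print(generate(model, tokenizer, prompt="Hallo, wie geht's?", max_tokens=32))
```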
g-group-ai-lab/gwen-tts-0.6B | g-group-ai-lab | 2026-04-03T05:38:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3_tts",
"text-generation",
"tts",
"voice-cloning",
"vietnamese",
"gwen-tts",
"qwen3-tts",
"speech-synthesis",
"text-to-speech",
"vi",
"zh",
"en",
"ja",
"ko",
"fr",
"de",
"it",
"pt",
"ru",
"es",
"base_model:Qwen/Qwen3-TTS-12Hz-0.6B-Base"... | text-to-speech | 2026-04-02T11:46:54Z | # Gwen-TTS 0.6B - Natural Vietnamese Voice Cloning
**Gwen-TTS** is a Vietnamese text-to-speech model with natural voice cloning capability.
**Key highlights:**
- Clone any voice with just a few seconds of reference audio
- Natural and expressive Vietnamese voice cloning
- Finetuned from [Qwen3-TTS-0.6B](https://huggi... | [] |
dobrien/ViT-B-32-SUN397-dummy-TINet-1e-4-arithmetic | dobrien | 2026-04-05T01:51:20Z | 0 | 0 | null | [
"pytorch",
"region:us"
] | null | 2026-02-20T02:40:43Z | ## Dataset: SUN397
## Dataset Location: tanganke/sun397
## Dummy Dataset: TINet
## Dummy Dataset Location: zh-plus/tiny-imagenet
## Loss Term: 1e-4
## Merge Method: arithmetic
## Test-Set Accuracy: 0.7504618167877197
## Test-Set Loss: 1.1089748680591582
##... | [] |
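"Merge Method: arithmetic" presumably refers to task-arithmetic-style merging; a toy sketch of that idea over PyTorch state dicts follows. The checkpoint paths and scaling coefficient are placeholders, not this repository's files.
```python
# Toy task-arithmetic merge: add the fine-tuned "task vector"
# (theta_ft - theta_base) onto the base weights with a scale factor.
# Paths and the coefficient are placeholders for illustration only.
import torch

base = torch.load("vit_b32_base.pt")    # hypothetical base state_dict
ft = torch.load("vit_b32_sun397.pt")    # hypothetical fine-tuned state_dict
lam = 1.0

merged = {k: base[k] + lam * (ft[k] - base[k]) for k in base}
torch.save(merged, "vit_b32_merged.pt")
```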
Godwinlyamba/Affine-messi4-5E79zqy9QBpm9iQ3tirXMqefD7vdmQM3qCJMAHrpX9NxE3V1 | Godwinlyamba | 2026-02-10T08:01:27Z | 10 | 0 | null | [
"safetensors",
"qwen3",
"region:us"
] | null | 2026-02-10T08:00:36Z | # Affine Model - UID67
This is a Qwen3-based model optimized for the Affine subnet.
## Model Details
- **Architecture**: Qwen3ForCausalLM
- **Parameters**: ~14.8B
- **Context Length**: 40,960 tokens
- **Precision**: bfloat16
- **Format**: ChatML with `<think>` reasoning
## Usage
```python
from transformers import ... | [] |
YousefBadr/gptneo-medical-lora | YousefBadr | 2026-02-21T08:52:00Z | 7 | 1 | transformers | [
"transformers",
"safetensors",
"gpt_neo",
"text-generation",
"medical",
"question-answering",
"causal-lm",
"lora",
"en",
"dataset:medalpaca/medical_meadow_medical_flashcards",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:adapter:EleutherAI/gpt-neo-125m",
"license:apache-2.0",
"endpoin... | text-generation | 2026-02-20T16:51:55Z | # GPT-Neo Medical LoRA Fine-Tuned Model
## Model Overview
This model is a medical-domain fine-tuned version of **EleutherAI/gpt-neo-125M**, trained using **LoRA (Low-Rank Adaptation)** on the **Medical Meadow Medical Flashcards dataset**.
The model generates accurate, structured medical responses given a medical ins... | [
{
"start": 18,
"end": 22,
"text": "LoRA",
"label": "training method",
"score": 0.7634351849555969
},
{
"start": 158,
"end": 162,
"text": "LoRA",
"label": "training method",
"score": 0.8781806230545044
},
{
"start": 164,
"end": 183,
"text": "Low-Rank Adapta... |
JosianaSilva/santander-customer-prediction | JosianaSilva | 2025-11-26T21:09:32Z | 0 | 0 | null | [
"region:us"
] | null | 2025-11-26T18:34:40Z | # Santander Customer Transaction Prediction
A model for predicting customer transactions, based on the Kaggle Santander Customer Transaction Prediction dataset.
This model was trained using a Gradient Boosting Classifier with dimensionality reduction via PCA.
## Metrics
- **Accuracy**: 0.9138 (91.38%)
-... | [] |
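A minimal scikit-learn sketch of the pipeline the card describes, PCA followed by gradient boosting; the component count is an assumption.
```python
# Minimal sketch of the described pipeline: PCA for dimensionality
# reduction, then a Gradient Boosting Classifier. Hyperparameters
# are illustrative, not the trained model's actual settings.
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import make_pipeline

clf = make_pipeline(PCA(n_components=50), GradientBoostingClassifier())
# clf.fit(X_train, y_train); y_pred = clf.predict(X_test)
```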
mradermacher/Composition-RL-8B-i1-GGUF | mradermacher | 2026-04-29T10:00:09Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:xx18/Composition-RL-8B",
"base_model:quantized:xx18/Composition-RL-8B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2026-04-29T09:02:00Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_... | [] |
kanishka/opt-babylm1-ntb-ntx_seed-1024_5e-6 | kanishka | 2026-04-26T02:57:32Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2026-04-25T18:51:35Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-babylm1-ntb-ntx_seed-1024_5e-6
This model was trained from scratch on an unknown dataset.
It achieves the following results o... | [] |
Jageen/music-4func | Jageen | 2026-01-06T00:50:10Z | 0 | 0 | peft | [
"peft",
"safetensors",
"function-calling",
"music",
"lora",
"functiongemma",
"gemma",
"fine-tuning",
"music-assistant",
"text-generation",
"conversational",
"arxiv:2106.09685",
"base_model:google/functiongemma-270m-it",
"base_model:adapter:google/functiongemma-270m-it",
"license:gemma",
... | text-generation | 2026-01-04T02:23:53Z | # 🎵 Music Assistant - 4 Functions (Fine-tuned FunctionGemma)
Fine-tuned [FunctionGemma-270M](https://huggingface.co/google/functiongemma-270m-it) for music control function calling using LoRA. Achieves **98.9% training accuracy** and **100% test accuracy** on 4 music control functions.
## Model Details
### Base Mod... | [
{
"start": 408,
"end": 412,
"text": "LoRA",
"label": "training method",
"score": 0.7513742446899414
}
] |
ClutchKrishna/scam-detector-v2 | ClutchKrishna | 2026-04-29T21:22:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
... | text-classification | 2026-04-29T16:02:10Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scam-detector-v2
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilber... | [] |
kesbeast23/mms-curriculum-wer | kesbeast23 | 2025-12-27T02:13:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-12-27T02:12:53Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-curriculum-wer
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the... | [] |
cristian-untaru/distilbert-medical-triage | cristian-untaru | 2026-05-04T21:25:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"medical-triage",
"healthcare",
"symptom-checker",
"natural-language-processing",
"academic-project",
"en",
"dataset:cristian-untaru/symcat-medical-triage-dataset",
"base_model:distilbert/distilbert-base-uncased",
"base_mode... | text-classification | 2026-05-04T18:17:23Z | # DistilBERT Medical Triage
This repository contains a fine-tuned DistilBERT model for medical pre-triage text classification. The model receives a natural-language symptom description and predicts one of three triage-oriented risk levels:
- `self_monitor`
- `consult_gp`
- `urgent`
This model was developed as part o... | [] |
UnifiedHorusRA/Princess_Peach_-_The_Super_Mario_Bros._Movie_Wan_Video_2.2_T2V-A14B | UnifiedHorusRA | 2025-09-13T22:15:28Z | 0 | 0 | null | [
"custom",
"art",
"en",
"region:us"
] | null | 2025-09-13T22:15:26Z | # Princess Peach - The Super Mario Bros. Movie (Wan Video 2.2 T2V-A14B)
**Creator**: [Kong__](https://civitai.com/user/Kong__)
**Civitai Model Page**: [https://civitai.com/models/1953877](https://civitai.com/models/1953877)
---
This repository contains multiple versions of the 'Princess Peach - The Super Mario Bros.... | [] |
linus-b/gemma-3-12b-it-CDF-teacher | linus-b | 2025-12-11T17:08:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-12b-it",
"base_model:finetune:google/gemma-3-12b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-12-11T16:54:57Z | # Model Card for gemma-3-12b-it-CDF-teacher
This model is a fine-tuned version of [google/gemma-3-12b-it](https://huggingface.co/google/gemma-3-12b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machi... | [] |
MattBou00/ROUND5ACTUALRETRYRUNNINGCODE-checkpoint-epoch-60 | MattBou00 | 2025-11-21T15:32:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | 2025-11-21T15:16:29Z | # TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL... | [] |
Xerv-AI/Ada | Xerv-AI | 2026-04-28T13:32:06Z | 439 | 0 | null | [
"safetensors",
"qwen2",
"unsloth",
"qwen",
"qwen2.5",
"math",
"reasoning",
"alpaca",
"pytorch",
"custom-finetune",
"lor-merged",
"text-generation",
"en",
"dataset:Xerv-AI/GRAD",
"dataset:yahma/alpaca-cleaned",
"base_model:unsloth/Qwen2.5-Math-1.5B",
"base_model:finetune:unsloth/Qwen2... | text-generation | 2026-04-25T04:04:11Z | ## 🌌 Xerv-AI/Ada: The Multi-Modal Mathematical Generalist SLM
**Ada** is an ultra-lightweight, high-speed, and highly optimized reasoning Small Language Model (SLM) derived from the powerful **Qwen2.5-Math-1.5B** architecture. Engineered specifically to bridge the gap between hyper-specialized graduate-level mathemati... | [] |
leocneves/dobrar_pano_eval | leocneves | 2025-10-19T05:13:51Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:leocneves/dobrar_pano",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-10-19T05:13:25Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
edmundwlo/pi05_put_down | edmundwlo | 2026-01-31T08:59:31Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"pi05",
"robotics",
"dataset:edmundwlo/put_down",
"license:apache-2.0",
"region:us"
] | robotics | 2026-01-31T08:53:03Z | # Model Card for pi05
<!-- Provide a quick summary of what the model is/does. -->
**π₀.₅ (Pi05) Policy**
π₀.₅ is a Vision-Language-Action model with open-world generalization, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀.₅ repres... | [] |
flaviovicentinilhp2025/qwen3-4b-banking-qlora-v1-seed123 | flaviovicentinilhp2025 | 2026-04-26T22:42:14Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:adapter:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"region:us"
] | null | 2026-04-26T21:41:09Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen3-4b-banking-qlora-v1-seed123
This model is a fine-tuned version of [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen... | [] |
phospho-app/ACT_BBOX-pick_marker-e2d0pll6qe | phospho-app | 2025-10-05T15:10:53Z | 0 | 0 | phosphobot | [
"phosphobot",
"safetensors",
"act",
"robotics",
"dataset:tremmelnicholas/pick_marker",
"region:us"
] | robotics | 2025-10-05T14:54:11Z | ---
datasets: tremmelnicholas/pick_marker
library_name: phosphobot
pipeline_tag: robotics
model_name: act
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act model - 🧪 phosphobot training pipeline
- **Dataset**: [tremmelnicholas/pick_marker](https://huggingface.co/datasets/tremmelnicholas/pick_marker)
- *... | [] |
lgcnsrobot/act_exhb_100_0416_over | lgcnsrobot | 2026-04-16T16:11:16Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:lgcnsrobot/G1-exhibition-60-real-260416",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2026-04-16T16:10:50Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.831265389919281
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8477550148963928
},
{
"start": 865,
"end": 868,
"text": "act",
"label":... |
Thireus/Qwen3.5-2B-THIREUS-BF16-SPECIAL_SPLIT | Thireus | 2026-04-13T10:58:20Z | 11 | 0 | null | [
"gguf",
"arxiv:2505.23786",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2026-03-08T18:33:04Z | # Qwen3.5-2B
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/Qwen3.5-2B-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the Qwen3.5-2B model (official repo: https://huggingface.co/Qwen/Qwen3.5-2B). These GGUF shards are designed to be used with **... | [] |
Stormtrooperaim/llama3.1-TitanForge-8B-Q4_K_M-GGUF | Stormtrooperaim | 2025-12-14T04:34:03Z | 3 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"llama-cpp",
"gguf-my-repo",
"base_model:Stormtrooperaim/llama3.1-TitanForge-8B",
"base_model:quantized:Stormtrooperaim/llama3.1-TitanForge-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-12-14T04:33:40Z | # Stormtrooperaim/TitanForge-8B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Stormtrooperaim/TitanForge-8B`](https://huggingface.co/Stormtrooperaim/TitanForge-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card]... | [] |
s3y/cupball2 | s3y | 2025-08-17T21:28:32Z | 0 | 0 | lerobot | [
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:s3y/ball-in-cup2",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] | robotics | 2025-08-17T21:26:56Z | # Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high succ... | [
{
"start": 17,
"end": 20,
"text": "act",
"label": "training method",
"score": 0.8059530854225159
},
{
"start": 120,
"end": 123,
"text": "ACT",
"label": "training method",
"score": 0.8365488052368164
},
{
"start": 883,
"end": 886,
"text": "act",
"label"... |
vasista22/whisper-tamil-medium | vasista22 | 2023-04-24T21:04:25Z | 907 | 16 | transformers | [
"transformers",
"pytorch",
"jax",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"ta",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-21T19:15:23Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tamil Medium
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) ... | [] |
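A minimal transcription sketch with the transformers ASR pipeline (the audio path is a placeholder):
```python
# Minimal transcription sketch; "sample_tamil.wav" is a placeholder path.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="vasista22/whisper-tamil-medium")
print(asr("sample_tamil.wav")["text"])
```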
mradermacher/KiteResolve-20B-i1-GGUF | mradermacher | 2025-12-31T21:15:47Z | 699 | 0 | transformers | [
"transformers",
"gguf",
"merge-conflicts",
"git-automation",
"developer-tools",
"code-generation",
"version-control",
"devops",
"en",
"dataset:SoarAILabs/merge-conflict-dataset",
"base_model:SoarAILabs/KiteResolve-20B",
"base_model:quantized:SoarAILabs/KiteResolve-20B",
"license:apache-2.0",... | null | 2025-09-08T11:09:08Z | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K... | [] |
parallelm/gpt2_small_EN_unigram_81920_parallel10_42 | parallelm | 2025-11-16T17:46:29Z | 14 | 0 | null | [
"safetensors",
"gpt2",
"generated_from_trainer",
"region:us"
] | null | 2025-11-16T17:46:12Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_small_EN_unigram_81920_parallel10_42
This model was trained from scratch on an unknown dataset.
It achieves the following re... | [] |
mlx-community/Hermes-4-70B-8bit | mlx-community | 2025-08-26T22:20:43Z | 25 | 2 | mlx | [
"mlx",
"safetensors",
"llama",
"Llama-3.1",
"instruct",
"finetune",
"reasoning",
"hybrid-mode",
"chatml",
"function calling",
"tool use",
"json mode",
"structured outputs",
"atropos",
"dataforge",
"long context",
"roleplaying",
"chat",
"text-generation",
"conversational",
"en... | text-generation | 2025-08-26T21:33:06Z | # mlx-community/Hermes-4-70B-8bit
This model [mlx-community/Hermes-4-70B-8bit](https://huggingface.co/mlx-community/Hermes-4-70B-8bit) was
converted to MLX format from [NousResearch/Hermes-4-70B](https://huggingface.co/NousResearch/Hermes-4-70B)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install ml... | [] |
arcturus14/ag_news_model | arcturus14 | 2026-02-26T12:39:41Z | 20 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google/bert_uncased_L-2_H-128_A-2",
"base_model:finetune:google/bert_uncased_L-2_H-128_A-2",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2026-02-26T12:33:54Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ag_news_model
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncase... | [] |
g-assismoraes/Qwen3-4B-Base-agnews | g-assismoraes | 2025-08-22T08:54:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-4B-Base",
"base_model:finetune:Qwen/Qwen3-4B-Base",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-08-21T04:22:02Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen3-4B-Base-agnews
This model is a fine-tuned version of [Qwen/Qwen3-4B-Base](https://huggingface.co/Qwen/Qwen3-4B-Base) on the... | [] |
Shoriful025/sentiment_analysis_bert_multilingual | Shoriful025 | 2025-12-23T16:04:55Z | 0 | 0 | null | [
"bert",
"region:us"
] | null | 2025-12-23T16:04:23Z | # sentiment_analysis_bert_multilingual
## Overview
This model is a fine-tuned version of the Multilingual BERT (mBERT) base model. It is designed to classify the sentiment of text across 100+ languages into three categories: Negative, Neutral, and Positive.
## Model Architecture
The model utilizes the standard BERT-b... | [] |
tanganke/convnext-base-224_rendered-sst2_sgd_batch-size-64_lr-0.01_steps-4000 | tanganke | 2026-01-12T08:33:13Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"convnext",
"image-classification",
"fusion-bench",
"merge",
"base_model:facebook/convnext-base-224",
"base_model:finetune:facebook/convnext-base-224",
"endpoints_compatible",
"region:us"
] | image-classification | 2026-01-12T08:32:11Z | # Deep Model Fusion
Fine-tuned ConvNeXt model on the rendered-sst2 dataset.
## Models Merged
This is a merged model created using [fusion-bench](https://github.com/tanganke/fusion_bench).
The following models were included in the merge:
- base model: facebook/convnext-base-224
## Configuration
The following YAML con... | [] |
gamlin/omnichannel-contact-center | gamlin | 2026-04-28T01:16:32Z | 0 | 0 | null | [
"vicidial",
"call-center",
"omnichannel",
"contact",
"center",
"license:mit",
"region:us"
] | null | 2026-04-28T01:16:32Z | # Omnichannel Contact Center
**Companies with strong omnichannel strategies retain 89% of their customers; companies without them retain 33%. Campaigns using three or more channels see a 287% higher purchase rate. And yet, only about a third of contact centers have actually unified their channels into a single agent experi... | [] |
yucaiZ/lab1_finetuning | yucaiZ | 2026-02-20T07:57:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2026-02-20T05:16:43Z | <!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lab1_finetuning
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en... | [] |
yuko29hu/Qwen3-4B-Instruct-2507-Q5_K_M-GGUF | yuko29hu | 2025-08-28T08:18:43Z | 31 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:quantized:Qwen/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-08-28T08:18:27Z | # yuko29hu/Qwen3-4B-Instruct-2507-Q5_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-4B-Instruct-2507`](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](h... | [] |
reece-omahoney/adv-libero-success-eighth-eplen | reece-omahoney | 2026-03-26T18:52:14Z | 65 | 0 | lerobot | [
"lerobot",
"safetensors",
"advantage",
"robotics",
"dataset:reece-omahoney/libero-10-adv-success-eighth-eplen",
"license:apache-2.0",
"region:us"
] | robotics | 2026-03-25T20:31:05Z | # Model Card for advantage
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingf... | [] |
cs4248-nlp/paper-s3-a3-bimga-query-only-tinybert-general-4l-312d-taco-hf-20260410-234932 | cs4248-nlp | 2026-04-12T12:11:13Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"code-search",
"embeddings",
"knowledge-distillation",
"en",
"license:mit",
"region:us"
] | null | 2026-04-12T12:10:53Z | # cs4248-nlp/paper-s3-a3-bimga-query-only-tinybert-general-4l-312d-taco-hf-20260410-234932
Code-search embedding model trained with the CS4248 two-phase KD pipeline.
## Model details
| Field | Value |
|-------|-------|
| Role | `s3-A3-bimga-query-only` |
| Phase | Phase 2 |
| Method | `s3-A3-bimga-query-only` |
| Da... | [] |