DeepSeek-R1-Distill-Llama-8B-NVFP4 (Blackwell Optimized - WIP)

This repository contains a self-quantized version of DeepSeek-R1-Distill-Llama-8B in the NVIDIA NVFP4 format. It was produced on an Asus Ascent GX10 (NVIDIA GB10 Grace Blackwell) system using the NVIDIA TensorRT Model Optimizer playbook.

Hardware & Architecture

Host System: Asus Ascent GX10 (Desktop AI Supercomputer)

Accelerator: NVIDIA Blackwell (SM121 / GB10)

Memory: 128GB Coherent Unified Memory (LPDDR5X)

Format: NVFP4 (4-bit Floating Point) with two-level micro-block scaling.
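The two-level scaling scheme above can be sketched in plain Python: weights quantize onto the 4-bit E2M1 grid within 16-element micro-blocks, each block carrying its own scale (stored as FP8 E4M3 on real hardware, kept in full precision here for clarity). This is an illustrative sketch of the numerics, not the actual packed encoding or any NVIDIA API; all function names are hypothetical.

```python
# Illustrative NVFP4-style two-level micro-block quantization.
# Assumption: 16-element micro-blocks, one scale per block; real hardware
# stores block scales as FP8 (E4M3) and adds a per-tensor FP32 scale.

E2M1_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 6.0]  # FP4 magnitudes

def _nearest_fp4(x):
    """Round a scaled value to the nearest signed E2M1 code point."""
    mag = min(E2M1_GRID, key=lambda g: abs(abs(x) - g))
    return -mag if x < 0 else mag

def quantize_nvfp4(values, block=16):
    """Return (codes, block_scales): FP4 codes plus one scale per micro-block."""
    codes, scales = [], []
    for i in range(0, len(values), block):
        chunk = values[i:i + block]
        amax = max(abs(v) for v in chunk) or 1.0
        scale = amax / 6.0  # map the block's max magnitude onto the top FP4 value
        scales.append(scale)
        codes.extend(_nearest_fp4(v / scale) for v in chunk)
    return codes, scales

def dequantize_nvfp4(codes, scales, block=16):
    return [c * scales[i // block] for i, c in enumerate(codes)]

weights = [0.11, -0.37, 0.92, -1.5, 0.03, 0.6, -0.25, 0.48,
           1.1, -0.9, 0.2, 0.7, -0.05, 0.33, -0.61, 0.84]
codes, scales = quantize_nvfp4(weights)
restored = dequantize_nvfp4(codes, scales)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(f"max abs error: {max_err:.3f}")
```

Because each block's scale tracks the block's own maximum, outliers in one micro-block do not crush the precision of the rest of the tensor, which is the main advantage over per-tensor 4-bit scaling.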

Current Performance Status (Jan 2026)

Tested on vLLM, but performance was not as expected; this remains a work in progress.

Deployment Details

Quantized using the standard NVIDIA NVFP4 playbook. The format targets roughly 3.5x memory reduction relative to BF16 while leveraging hardware-level FP4 acceleration on Blackwell silicon.
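The roughly 3.5x figure follows from simple storage arithmetic. A quick check, assuming one FP8 scale byte per 16-element micro-block (the per-tensor FP32 scale is negligible at model scale):

```python
# Back-of-the-envelope check of the ~3.5x memory-reduction claim.
BF16_BYTES_PER_PARAM = 2.0
FP4_BYTES_PER_PARAM = 4 / 8        # 4-bit weight value
SCALE_BYTES_PER_PARAM = 1 / 16     # one FP8 (E4M3) scale byte per 16 weights

nvfp4_bytes = FP4_BYTES_PER_PARAM + SCALE_BYTES_PER_PARAM  # 0.5625 B/param
ratio = BF16_BYTES_PER_PARAM / nvfp4_bytes
print(f"compression vs BF16: {ratio:.2f}x")  # → compression vs BF16: 3.56x
```

For an 8B-parameter model this works out to roughly 4.5 GB of weights versus about 16 GB in BF16, which is why the checkpoint fits comfortably in unified memory alongside KV cache.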

License

Original weights by DeepSeek-AI are released under the MIT License; this quantization follows those permissive terms.
