DeepSeek-R1-Distill-Llama-8B-NVFP4 (Blackwell Optimized - WIP)
This repository contains a self-quantized version of DeepSeek-R1-Distill-Llama-8B using the NVIDIA NVFP4 format. This was produced on an Asus Ascent GX10 (NVIDIA GB10 Grace Blackwell) system using the NVIDIA ModelOptimizer playbook.
Hardware & Architecture
Host System: Asus Ascent GX10 (Desktop AI Supercomputer)
Accelerator: NVIDIA Blackwell (SM121 / GB10)
Memory: 128GB Coherent Unified Memory (LPDDR5X)
Format: NVFP4 (4-bit Floating Point) with two-level micro-block scaling.
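To make the "two-level micro-block scaling" concrete, here is a toy, simplified sketch of how NVFP4 quantizes a 16-element micro-block: each element is snapped to the nearest FP4 (E2M1) grid value, and the block carries one scale factor (stored as FP8 E4M3 in real NVFP4; kept as a plain float here), optionally combined with a second-level per-tensor scale. This is an illustration of the scheme, not the NVIDIA reference implementation.

```python
# Toy illustration of NVFP4-style two-level micro-block scaling (simplified).
# Real NVFP4 stores the block scale in FP8 E4M3; here it stays a Python float.
FP4_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]  # E2M1 magnitudes

def quantize_block(block, tensor_scale=1.0):
    """Quantize one 16-element micro-block: FP4 values + one block scale."""
    amax = max(abs(x) for x in block)
    # Level-1 scale maps the block's max magnitude onto FP4's max (6.0);
    # tensor_scale is the level-2, per-tensor scale.
    scale = (amax / 6.0) / tensor_scale if amax else 1.0
    q = []
    for x in block:
        t = abs(x) / (scale * tensor_scale)
        nearest = min(FP4_GRID, key=lambda g: abs(g - t))
        q.append(-nearest if x < 0 else nearest)
    return q, scale

def dequantize_block(q, scale, tensor_scale=1.0):
    """Reconstruct approximate values from FP4 codes and the two scales."""
    return [v * scale * tensor_scale for v in q]
```

Because only one FP8 scale is stored per 16 weights, the metadata overhead is small while each block still adapts to its own dynamic range.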
Current Performance Status (Jan 2026)
Tested on vLLM, but throughput and latency have not yet met expectations; performance tuning on this hardware is a work in progress.
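For reproducing the vLLM test, a deployment sketch follows. The exact flags and NVFP4/ModelOpt checkpoint auto-detection depend on your vLLM version and Blackwell support; treat this as a starting point, not a verified recipe.

```shell
# Hypothetical serving command; NVFP4 support requires a recent vLLM
# build with ModelOpt checkpoint detection and Blackwell-class hardware.
pip install -U vllm
vllm serve vipertsniper/DeepSeek-R1-Distill-Llama-8B-NVFP4 \
  --max-model-len 8192
```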
Deployment Details
Quantized using the standard NVIDIA NVFP4 playbook. The format targets roughly a 3.5x memory reduction compared to BF16 while using hardware-accelerated FP4 on Blackwell silicon.
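The ~3.5x figure follows from the bit arithmetic of the format: 4 bits per weight plus one 8-bit block scale shared across each 16-element micro-block gives an effective 4.5 bits per weight versus 16 for BF16. A back-of-envelope check (illustrative arithmetic only; real checkpoints carry extra per-tensor scales and unquantized layers):

```python
# Back-of-envelope check of the ~3.5x memory-reduction claim.
params = 8e9                 # ~8B weights
bf16_bits = 16
fp4_bits = 4
block_size = 16              # NVFP4 micro-block size
scale_bits = 8               # per-block FP8 (E4M3) scale

nvfp4_bits_per_weight = fp4_bits + scale_bits / block_size   # 4.5 bits
ratio = bf16_bits / nvfp4_bits_per_weight                    # ~3.56x

bf16_gb = params * bf16_bits / 8 / 1e9                       # ~16 GB
nvfp4_gb = params * nvfp4_bits_per_weight / 8 / 1e9          # ~4.5 GB
print(round(ratio, 2), round(bf16_gb, 1), round(nvfp4_gb, 1))
```

At roughly 4.5 GB of weights, the model fits comfortably in the GX10's 128GB unified memory alongside KV cache.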
License
Original weights by DeepSeek-AI are under the MIT License. This quantization follows those permissive terms.
Base model
deepseek-ai/DeepSeek-R1-Distill-Llama-8B