---
datasets:
- camel-ai/loong
base_model:
- khazarai/Chemistry-R1
tags:
- nvfp4
- fp4
- quantized
---
# Chemistry-R1-nvfp4
**Format:** NVFP4 — weights & activations quantized to FP4 with dual scaling.
**Base model:** `khazarai/Chemistry-R1`
**How it was made:** One-shot quantization with LLM Compressor using the NVFP4 recipe, calibrated on long sequences (256 samples of 4096 tokens each) from camel-ai/loong.
> Notes: Keep `lm_head` in high precision; calibrate on long, domain-relevant sequences.
See the original model card for details about the base model.
# Running the model with vLLM in Docker
```sh
sudo docker run --runtime nvidia --gpus all -p 8000:8000 --ipc=host vllm/vllm-openai:nightly --model Firworks/Chemistry-R1-nvfp4 --dtype auto --max-model-len 32768
```
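Once the container is up, vLLM exposes an OpenAI-compatible API on port 8000. A minimal sketch of a chat-completions request (the prompt is illustrative; the endpoint path and payload follow the standard OpenAI chat-completions schema):

```python
import json
import urllib.request

# OpenAI-compatible chat-completions payload for the vLLM server.
payload = {
    "model": "Firworks/Chemistry-R1-nvfp4",
    "messages": [{"role": "user", "content": "Balance: Fe + O2 -> Fe2O3"}],
    "max_tokens": 512,
}

# Uncomment once the server is running to send the request:
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# resp = json.loads(urllib.request.urlopen(req).read())
# print(resp["choices"][0]["message"]["content"])

print(json.dumps(payload, indent=2))
```

The same request works against the DGX Spark container below, since both serve the standard vLLM OpenAI endpoint.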
# Running the model on the DGX Spark with vLLM in Docker
```sh
sudo docker run --gpus all --network host --ipc=host nvcr.io/nvidia/vllm:26.02-py3 vllm serve Firworks/Chemistry-R1-nvfp4 --dtype auto --max-model-len 32768
```
This was tested on a DGX Spark (GB10 Grace Blackwell Superchip, 128GB unified memory).
If there are other models you'd like to see quantized to NVFP4 for use on the DGX Spark, or on other modern Blackwell (or newer) cards, let me know. I'm trying to make more NVFP4 models available so more people can try them out.