---
datasets:
  - nvidia/OpenCodeInstruct
base_model:
  - IQuestLab/IQuest-Coder-V1-40B-Instruct
tags:
  - nvfp4
  - fp4
  - quantized
---

# IQuest-Coder-V1-40B-Instruct-nvfp4

- **Format:** NVFP4 (weights and activations quantized to FP4 with dual scaling)
- **Base model:** IQuestLab/IQuest-Coder-V1-40B-Instruct
- **How it was made:** one-shot calibration with LLM Compressor (NVFP4 recipe), using 256 long calibration samples (4,096 tokens each) from nvidia/OpenCodeInstruct
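The dual scaling mentioned above can be sketched numerically. This is a simplified illustration, not the production kernel: NVFP4 stores FP4 (E2M1) elements with a scale per 16-element block plus a per-tensor scale, where the block scales are held in FP8 (E4M3); the sketch below keeps all scales in FP32 for clarity, and the function name and shapes are hypothetical.

```python
import numpy as np

# Representable magnitudes of FP4 E2M1, the element format used by NVFP4.
E2M1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_nvfp4_block(x, block=16):
    """Illustrative NVFP4-style block quantization: each 16-element block
    gets its own scale, and scaled elements snap to the nearest E2M1 value.
    (Real NVFP4 stores block scales in FP8 E4M3 under a per-tensor FP32
    scale; here all scales stay in FP32 to keep the sketch readable.)"""
    x = np.asarray(x, dtype=np.float32)
    assert x.size % block == 0, "sketch assumes a multiple of the block size"
    blocks = x.reshape(-1, block)
    out = np.empty_like(blocks)
    scales = np.empty(blocks.shape[0], dtype=np.float32)
    for i, b in enumerate(blocks):
        amax = np.abs(b).max()
        # Map the block's max magnitude onto E2M1's max representable value (6).
        s = amax / 6.0 if amax > 0 else 1.0
        scales[i] = s
        mags = np.abs(b) / s
        # Snap each scaled magnitude to the nearest representable E2M1 value.
        idx = np.abs(mags[:, None] - E2M1[None, :]).argmin(axis=1)
        out[i] = np.sign(b) * E2M1[idx] * s
    return out.reshape(x.shape), scales
```

The per-block scale is what lets FP4's tiny dynamic range track local magnitudes; the per-tensor scale (folded away here) keeps the FP8 block scales themselves in range.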

Notes: keep `lm_head` in high precision; calibrate on long, domain-relevant sequences.
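A recipe along these lines could look like the following sketch. This is an assumption about the exact recipe, not the one actually used: the stage name and `targets` value are illustrative, while `scheme: NVFP4` and `ignore: ["lm_head"]` follow LLM Compressor's quantization-modifier conventions for keeping the output head in high precision.

```yaml
quant_stage:
  quant_modifiers:
    QuantizationModifier:
      targets: ["Linear"]
      scheme: "NVFP4"
      ignore: ["lm_head"]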

See the original model card for more information about this model.

## Running the model with vLLM in Docker

Note: this model is not yet supported in a released vLLM container. There are details in a pull request adding support, which could be used to run the model in vLLM: https://github.com/vllm-project/vllm/pull/31575

```shell
sudo docker run --runtime nvidia --gpus all -p 8000:8000 --ipc=host \
  vllm/vllm-openai:nightly \
  --model Firworks/IQuest-Coder-V1-40B-Instruct-nvfp4 \
  --dtype auto \
  --max-model-len 32768
```

This was tested on an RTX Pro 6000 Blackwell cloud instance.

If there are other models you'd like to see quantized to NVFP4 for use on the DGX Spark or other Blackwell (or newer) cards, let me know. I'm trying to make more NVFP4 models available so more people can try them out.