---
datasets:
  - HuggingFaceH4/ultrachat_200k
base_model:
  - utter-project/EuroLLM-22B-Instruct-2512
license: apache-2.0
---

# EuroLLM-22B-Instruct-2512-nvfp4

- **Format:** NVFP4, with weights and activations quantized to FP4 using dual scaling.
- **Base model:** utter-project/EuroLLM-22B-Instruct-2512
- **How it was made:** one-shot calibration with LLM Compressor (NVFP4 recipe), using 256 long calibration sequences of 4,096 tokens each from HuggingFaceH4/ultrachat_200k.

**Notes:** keep `lm_head` in high precision and calibrate on long, domain-relevant sequences; both choices appear in the sketch below.
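For reference, here is a minimal sketch of what such a one-shot run looks like with LLM Compressor. It follows the library's published NVFP4 example pattern rather than the exact script used for this checkpoint, and assumes an `llmcompressor` release recent enough to ship the NVFP4 scheme:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "utter-project/EuroLLM-22B-Instruct-2512"
NUM_CALIBRATION_SAMPLES = 256
MAX_SEQUENCE_LENGTH = 4096

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Long-sequence calibration data: 256 ultrachat_200k conversations rendered
# through the model's chat template, then tokenized to 4096-token sequences.
ds = load_dataset(
    "HuggingFaceH4/ultrachat_200k",
    split=f"train_sft[:{NUM_CALIBRATION_SAMPLES}]",
)
ds = ds.map(
    lambda ex: {"text": tokenizer.apply_chat_template(ex["messages"], tokenize=False)}
)
ds = ds.map(
    lambda sample: tokenizer(
        sample["text"],
        padding=False,
        max_length=MAX_SEQUENCE_LENGTH,
        truncation=True,
        add_special_tokens=False,
    ),
    remove_columns=ds.column_names,
)

# NVFP4 recipe: quantize every Linear layer's weights and activations to FP4,
# but leave lm_head in high precision (per the notes above).
recipe = QuantizationModifier(targets="Linear", scheme="NVFP4", ignore=["lm_head"])

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
)

SAVE_DIR = "EuroLLM-22B-Instruct-2512-nvfp4"
model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
```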

See the original model card for more information about the base model.

## Running the model with vLLM in Docker

```bash
sudo docker run --runtime nvidia --gpus all -p 8000:8000 --ipc=host \
  vllm/vllm-openai:nightly \
  --model Firworks/EuroLLM-22B-Instruct-2512-nvfp4 \
  --dtype auto \
  --max-model-len 32768
```

This was tested on an RTX Pro 6000 Blackwell cloud instance.
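Once the container is up, the server speaks the OpenAI API. A minimal smoke test with the `openai` Python client, assuming the default port mapping from the command above:

```python
from openai import OpenAI

# vLLM's OpenAI-compatible server; the API key is unused by default.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="Firworks/EuroLLM-22B-Instruct-2512-nvfp4",
    messages=[{"role": "user", "content": "Say hello in three European languages."}],
    max_tokens=128,
)
print(resp.choices[0].message.content)
```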

If there are other models you'd like to see quantized to NVFP4 for use on the DGX Spark or other modern Blackwell (or newer) cards, let me know. I'm trying to make more NVFP4 models available so that more people can try them out.