---
datasets:
  - Rombo-Org/Optimized_Reasoning
base_model:
  - stepfun-ai/Step-3.5-Flash
tags:
  - nvfp4
  - fp4
  - quantized
---

# Step-3.5-Flash-nvfp4

Note: This is mostly an experiment, but I've been trying anything I can to get this model to complete quantization in NVFP4, and I finally cracked it! With a monkey patch I was able to get llm-compressor to produce something that at least looks like an NVFP4 quant of this model. However, I cannot get vLLM to load it. I only ran it with a single calibration sample for troubleshooting; if I (or someone else) figure out how to get it to actually load in vLLM or Transformers, I can go back and re-run it with a more normal calibration data set. Only mess with this if you want a technical challenge. Don't expect it to work.

- **Format:** NVFP4 (weights and activations quantized to FP4 with dual scaling).
- **Base model:** stepfun-ai/Step-3.5-Flash
- **How it was made:** One-shot calibration with LLM Compressor (NVFP4 recipe), using long-sequence calibration (1 sample of length 512) from Rombo-Org/Optimized_Reasoning; see the sketch below.
- **Notes:** Keep lm_head in high precision; calibrate on long, domain-relevant sequences.
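
For reference, here is a minimal sketch of the kind of one-shot run described above, following the standard llm-compressor NVFP4 examples. It does not include the monkey patch mentioned in the note, and the exact import paths, arguments, and dataset preprocessing are assumptions rather than the actual script used:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "stepfun-ai/Step-3.5-Flash"
SAVE_DIR = "Step-3.5-Flash-nvfp4"

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)

# NVFP4 scheme: FP4 weights and activations with dual scaling.
# lm_head is left in high precision, per the notes above.
recipe = QuantizationModifier(targets="Linear", scheme="NVFP4", ignore=["lm_head"])

# Calibration data; only a single 512-token sample was used here, purely for
# troubleshooting. In practice the dataset usually needs to be formatted and
# tokenized (e.g. via the chat template) before calibration; that step is
# omitted in this sketch.
ds = load_dataset("Rombo-Org/Optimized_Reasoning", split="train")

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=512,
    num_calibration_samples=1,
)

model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
```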

Check the original model card for information about this model.

## Running the model with vLLM in Docker

No idea how yet.
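
For anyone who wants to poke at it anyway, the sketch below shows how one would normally try to load a compressed-tensors checkpoint with vLLM's Python API (the repo ID Firworks/Step-3.5-Flash-nvfp4 is assumed). Per the note above, this currently fails to load, so treat it as a starting point for debugging rather than a working recipe:

```python
from vllm import LLM, SamplingParams

# vLLM normally picks up compressed-tensors / NVFP4 quantization from the
# checkpoint config, so no explicit quantization flag is passed here.
llm = LLM(
    model="Firworks/Step-3.5-Flash-nvfp4",  # assumed repo ID
    trust_remote_code=True,
    max_model_len=4096,
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain NVFP4 quantization in one sentence."], params)
print(outputs[0].outputs[0].text)
```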

If there are other models you're interested in seeing quantized to NVFP4 for use on the DGX Spark or other Blackwell (or newer) cards, let me know. I'm trying to make more NVFP4 models available so more people can try them out.