Firworks committed · verified
Commit 8f310a8 · 1 parent: 1871ef7

Update README.md

Files changed (1): README.md (+1, -0)
README.md CHANGED
@@ -19,6 +19,7 @@ tags:
 Check the original model card for information about this model.
 
 # Running the model with vLLM in Docker
+Note: currently vLLM doesn't seem to run an NVFP4-quantized model on the lfm2 code path. I'm working on a patch but don't have it working yet.
 ```sh
 sudo docker run --runtime nvidia --gpus all -p 8000:8000 --ipc=host vllm/vllm-openai:nightly --model Firworks/LFM2.5-1.2B-Base-nvfp4 --dtype auto --max-model-len 32768
 ```
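The `vllm/vllm-openai` container started by the command above serves an OpenAI-compatible HTTP API on the mapped port 8000. As a sketch (assuming the server actually comes up and the model loads, which the note in the diff says may not yet work for this NVFP4 quantization; the prompt and token count are arbitrary), a completion request could look like:

```sh
# Query the vLLM OpenAI-compatible completions endpoint.
# Assumes the container from the docker run command above is running
# and listening on localhost:8000 (per the -p 8000:8000 mapping).
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Firworks/LFM2.5-1.2B-Base-nvfp4",
        "prompt": "Liquid foundation models are",
        "max_tokens": 32
      }'
```

If the model fails to load on the lfm2 code path as described, the container will exit before this endpoint becomes reachable; checking `docker logs` on the container is the quickest way to confirm.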