Firworks committed (verified) · Commit 173add4 · 1 Parent(s): 9bedf50

Update README.md

Files changed (1): README.md (+6 -1)
README.md CHANGED
@@ -15,8 +15,13 @@ base_model:
 Check the original model card for information about this model.
 
 # Running the model with VLLM in Docker
+Requires xlstm (pip install xlstm==2.0.4).
+
+As of vLLM 0.13.0rc2.dev118, vLLM does not support BolmoForCausalLM yet, so use Transformers for now.
+
+Some day, this command will probably work.
 ```sh
-sudo docker run --runtime nvidia --gpus all -p 8000:8000 --ipc=host vllm/vllm-openai:nightly --model Firworks/Bolmo-7B-nvfp4 --dtype auto --max-model-len 32768
+sudo docker run --runtime nvidia --gpus all -p 8000:8000 --ipc=host vllm/vllm-openai:nightly --model Firworks/Bolmo-7B-nvfp4 --dtype auto --max-model-len 32768 --trust-remote-code
 ```
 This was tested on an RTX Pro 6000 Blackwell cloud instance.
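For anyone who prefers Docker Compose over a long `docker run` line, the command above translates into a compose file. This is an untested sketch carrying over the same image and flags; the service name `vllm` is arbitrary, and it will only work once vLLM actually supports this model:

```yaml
# Sketch of a docker-compose.yml equivalent to the docker run command above.
services:
  vllm:
    image: vllm/vllm-openai:nightly
    ipc: host                      # matches --ipc=host
    ports:
      - "8000:8000"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia       # matches --runtime nvidia --gpus all
              count: all
              capabilities: [gpu]
    command: >
      --model Firworks/Bolmo-7B-nvfp4
      --dtype auto
      --max-model-len 32768
      --trust-remote-code
```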
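The updated README says to use Transformers until vLLM supports BolmoForCausalLM. A minimal sketch of what that could look like — the `generate` helper and its defaults are my own illustration, assuming the model loads through the standard `AutoModelForCausalLM` path with `trust_remote_code=True`, and that a GPU plus the `xlstm` dependency noted in the README are available:

```python
# Hypothetical helper for running Firworks/Bolmo-7B-nvfp4 via Transformers,
# since vLLM does not support BolmoForCausalLM yet (per the README above).
MODEL_ID = "Firworks/Bolmo-7B-nvfp4"

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Generate a completion; downloads the model on first use (GPU required)."""
    # Imports live inside the helper so merely defining it stays cheap.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        trust_remote_code=True,
        device_map="auto",    # place the weights on available GPUs
        torch_dtype="auto",   # mirrors the --dtype auto flag in the vLLM command
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens and return only the newly generated text.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
```

This loads the checkpoint eagerly inside the helper; in a long-running service you would cache the model and tokenizer instead of reloading per call.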
27