Update README.md
README.md CHANGED

@@ -28,11 +28,11 @@ Read more about how the model is trained and evaluated in our [technical report](

  # Usage: Serve with SGLang
  ```bash
- python -m sglang.launch_server --model-path RyanLi0802/Biomni-R0-Preview --port 30000 --host 0.0.0.0 --mem-fraction-static 0.8 --tp 2 --trust-remote-code --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":
+ python -m sglang.launch_server --model-path RyanLi0802/Biomni-R0-Preview --port 30000 --host 0.0.0.0 --mem-fraction-static 0.8 --tp 2 --trust-remote-code --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":1.0,"original_max_position_embeddings":32768}, "max_position_embeddings": 131072}'
  ```
  This requires two GPUs with 80 GB of VRAM each. Alternatively, you may serve with four 40 GB GPUs via `--tp 4`.

- Note, `rope_scaling` might degrade performance on tasks with shorter trajectories. Please tune the rope scaling factor according to your usage.
+ Note: if your task requires significantly more than the original 32768-token context length, you may set the `rope_scaling` factor to a value `>1.0` and `<=4.0` for smoother context-window extension. However, `rope_scaling` might degrade performance on tasks with shorter trajectories, so please tune the factor according to your usage.

  To run inference with the Biomni-E1 environment, please follow the instructions in our [official repo](https://github.com/snap-stanford/biomni).
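As a minimal sketch of the override payload added in this diff: the snippet below builds the JSON string passed to `--json-model-override-args`, with the factor bounds taken from the README's note (`>1.0` and `<=4.0`, with `1.0` as the default in the command above). The helper name `override_args` is hypothetical, not part of SGLang.

```python
import json

ORIGINAL_CTX = 32768   # original_max_position_embeddings in the command above
MAX_CTX = 131072       # max_position_embeddings override (4 x 32768)

def override_args(factor: float) -> str:
    """Build the --json-model-override-args value for a chosen YaRN factor.

    Assumption (from the README note): factors above 1.0 and up to 4.0
    extend the context window; 1.0 leaves scaling effectively off.
    """
    assert 1.0 <= factor <= 4.0, "README suggests a factor in [1.0, 4.0]"
    cfg = {
        "rope_scaling": {
            "rope_type": "yarn",
            "factor": factor,
            "original_max_position_embeddings": ORIGINAL_CTX,
        },
        "max_position_embeddings": MAX_CTX,
    }
    return json.dumps(cfg)

# Example: a factor of 2.0 for tasks moderately past the 32768-token base.
print(override_args(2.0))
```

The resulting string can be passed verbatim as the `--json-model-override-args` value in the launch command; keep the factor at `1.0` for tasks that fit within the original context length.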