zRzRzRzRzRzRzR committed
Commit a930807 · 2 Parent(s): 9980c03 279ecdf

Merge branch 'main' of hf.co:zai-org/GLM-4.7-Flash

Files changed (1): README.md +7 -1
README.md CHANGED

````diff
@@ -82,7 +82,12 @@ pip install git+https://github.com/huggingface/transformers.git
 
 ### SGLang
 
-+ using pip install sglang from source, then update transformers to the latest main branch.
++ Install the supported versions of SGLang and Transformers (using `uv` is recommended):
+
+ ```shell
+ uv pip install sglang==0.3.2.dev9039+pr-17247.g90c446848 --extra-index-url https://sgl-project.github.io/whl/pr/
+ uv pip install git+https://github.com/huggingface/transformers.git@76732b4e7120808ff989edbd16401f61fa6a0afa
+ ```
 
 ### transformers
 
@@ -149,6 +154,7 @@ python3 -m sglang.launch_server \
   --host 0.0.0.0 \
   --port 8000
 ```
++ For Blackwell GPUs, include `--attention-backend triton --speculative-draft-attention-backend triton` in your SGLang launch command.
 
 ## Citation
````
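Putting the two changes together, the resulting launch command might look like the sketch below. The `--model-path` value is an assumption taken from the repository name in the merge message; the `--host`/`--port` flags come from the diff context, and the Triton backend flags are the Blackwell-specific additions from this commit.

```shell
# Sketch of an SGLang server launch for GLM-4.7-Flash on a Blackwell GPU.
# --model-path is assumed from the repo name (zai-org/GLM-4.7-Flash);
# the Triton attention-backend flags are the ones added in this commit.
python3 -m sglang.launch_server \
  --model-path zai-org/GLM-4.7-Flash \
  --attention-backend triton \
  --speculative-draft-attention-backend triton \
  --host 0.0.0.0 \
  --port 8000
```

On non-Blackwell hardware the two `--*attention-backend` flags can simply be dropped, per the diff above.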