FlagRelease committed
Commit d3f2c5d · 1 parent: 05d20e5

update README: reorder FlagOS and model download steps

Files changed (1): README.md (+6 −8)
README.md CHANGED
@@ -60,18 +60,18 @@ The MAPE between CUDA and FlagOS (CUDA as ground truth) is 2.2994%. You can easil
 
 ## Operation Steps
 
-### Download Open-source Model Weights
+### Download FlagOS Image
 
 ```bash
-pip install modelscope
-modelscope download --model BAAI/RoboBrain-X0-Preview --local_dir /share/RoboBrain-X0-Preview
-
+docker pull harbor.baai.ac.cn/flagrelease-public/flagrelease_nvidia_x0_norand
 ```
 
-### Download FlagOS Image
+### Download Open-source Model Weights
 
 ```bash
-docker pull harbor.baai.ac.cn/flagrelease-public/flagrelease_nvidia_x0_norand
+pip install modelscope
+modelscope download --model BAAI/RoboBrain-X0-Preview --local_dir /share/RoboBrain-X0-Preview
+
 ```
 
 ### Start the inference service
@@ -133,7 +133,6 @@ torch.backends.cudnn.benchmark = False
 
 1. You can try loading the model in BF16 for lower memory usage or faster inference, but BF16 has only 7 mantissa bits, which cannot keep the MAPE under 5% even if you launch two CUDA servers on different GPU IDs of the same GPU server.
 
-
 # Contributing
 
 We warmly welcome global developers to join us:
@@ -143,7 +142,6 @@ We warmly welcome global developers to join us:
 3. Improve technical documentation
 4. Expand hardware adaptation support
 
-
 # License
 
 The weights of this model come from BAAI/RoboBrain-X0-Preview, released under the Apache 2.0 license (https://www.apache.org/licenses/LICENSE-2.0.txt).
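The MAPE figure cited in the first hunk header (2.2994%, with CUDA as ground truth) can be computed in a few lines. The sketch below is illustrative only, not the repository's actual evaluation code, and the sample arrays are invented:

```python
def mape(truth, pred):
    """Mean absolute percentage error (in percent), with `truth` as ground truth."""
    assert len(truth) == len(pred) and len(truth) > 0
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(truth, pred)) / len(truth)

# Hypothetical per-element outputs from the two backends (invented numbers).
cuda_out = [0.82, -1.40, 2.05, 0.11]    # ground truth (CUDA)
flagos_out = [0.80, -1.43, 2.01, 0.11]  # compared backend (FlagOS)
print(f"MAPE: {mape(cuda_out, flagos_out):.4f}%")
```

In a real comparison the two lists would hold flattened model outputs from the two inference servers run on the same inputs.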
 
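The BF16 precision note above can be demonstrated without a GPU: bfloat16 keeps float32's 8-bit exponent but only 7 stored mantissa bits, so values that differ only in low-order bits collapse to the same number. The sketch below emulates bf16 by truncating a float32 bit pattern (real frameworks typically round to nearest; truncation is used here purely for simplicity):

```python
import struct

def to_bf16(x: float) -> float:
    """Emulate bfloat16 by zeroing the low 16 bits of the float32 encoding."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

print(to_bf16(3.14159))  # 3.140625 -- only ~2-3 decimal digits survive
print(to_bf16(1.001))    # 1.0 -- the 0.001 offset is below bf16 resolution
```

With so coarse a resolution, small per-element discrepancies between two servers become large in relative-error terms, which is consistent with the note that BF16 cannot hold the MAPE under 5%.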