Update README.md
README.md CHANGED

@@ -4,19 +4,19 @@ language:
 - zh
 library_name: transformers
 license: mit
+pipeline_tag: text-generation
 tasks:
 - text-generation
 frameworks: PyTorch
-pipeline_tag: text-to-text
 tags:
 - vLLM
 - AWQ
 base_model:
-
+- zai-org/GLM-4.7-Flash
 base_model_relation: quantized
 ---
 # GLM-4.7-Flash-AWQ
-Base model: [
+Base model: [zai-org/GLM-4.7-Flash](https://huggingface.co/zai-org/GLM-4.7-Flash)
 
 
 ### 【Dependencies / Installation】

@@ -42,7 +42,7 @@ export VLLM_USE_FLASHINFER_SAMPLER=0
 export OMP_NUM_THREADS=4
 
 vllm serve \
-    __YOUR_PATH__/
+    __YOUR_PATH__/QuantTrio/GLM-4.7-Flash-AWQ \
     --served-model-name MY_MODEL_NAME \
     --swap-space 4 \
     --max-model-len 32768 \

@@ -72,8 +72,8 @@ vllm serve \
 
 ### 【Model Download】
 ```python
-from
-snapshot_download('
+from huggingface_hub import snapshot_download
+snapshot_download('QuantTrio/GLM-4.7-Flash-AWQ', cache_dir="your_local_path")
 ```
 
 ### 【Overview】
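Once the downloaded checkpoint is served with the `vllm serve` command shown in the hunks above, vLLM exposes an OpenAI-compatible HTTP API. A minimal sketch of building and sending a chat request with only the standard library, assuming vLLM's default port 8000 and the placeholder served model name `MY_MODEL_NAME` from the `--served-model-name` flag (both are assumptions to adjust for your deployment):

```python
import json
import urllib.request

# Assumes vLLM's default port (8000) and the --served-model-name value
# used in the serve command above; adjust both for your deployment.
payload = {
    "model": "MY_MODEL_NAME",
    "messages": [{"role": "user", "content": "Hello, who are you?"}],
    "max_tokens": 64,
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# Uncomment once the server is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

The same request can be issued with the official `openai` Python client by pointing its `base_url` at `http://localhost:8000/v1`.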