---
base_model:
- zai-org/GLM-4.7-Flash
---

## Model Details

This is an int4 model with group size 128 and symmetric quantization, generated from [zai-org/GLM-4.7-Flash](https://huggingface.co/zai-org/GLM-4.7-Flash) with [intel/auto-round](https://github.com/intel/auto-round). See the section *Generate the model* below for details.
Please follow the license of the original model.
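
To verify the quantization settings after download, you can inspect the recorded configuration. Below is a minimal sketch that assumes the settings are stored under `quantization_config` in the repository's `config.json`, as is usual for auto-round exports:

```python
import json

from huggingface_hub import hf_hub_download

# Fetch only config.json from the Hub (no weights are downloaded).
config_path = hf_hub_download("Intel/GLM-4.7-Flash-int4-AutoRound", "config.json")
with open(config_path) as f:
    config = json.load(f)

# auto-round exports record their settings here; expect bits=4,
# group_size=128, and sym=True for this model.
print(config["quantization_config"])
```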

## How To Use

### INT4 Inference

#### Transformers (CPU/Intel GPU/CUDA)
  
**Please make sure you have installed the auto_round package from the correct branch:**
```bash
pip install git+https://github.com/intel/auto-round.git@enable_glm4_moe_lite_quantization
```
  
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Intel/GLM-4.7-Flash-int4-AutoRound"

# Load the quantized model on the available device(s).
model = AutoModelForCausalLM.from_pretrained(
    model_name, dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [{"role": "user", "content": "hello"}]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
output_text = tokenizer.decode(generated_ids[0][inputs.input_ids.shape[1]:])
print(output_text)
"""
1.  **Analyze the user's input:** The user said "hello". This is a standard greeting.

2.  **Determine the intent:** The user is initiating a conversation. They want to know if I'm active and ready to help.

3.  **Formulate the response:**
    *   Acknowledge the greeting.
    *   Offer assistance.
    *   Keep it friendly and helpful.

4.  **Drafting the response (internal monologue/trial):**
    *   *Option 1:* Hello. How can I help? (Simple, direct)
    *   *Option 2

"""
```
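
For interactive use, generation can also be streamed token by token. This is an optional variation using the stock `TextStreamer` helper from transformers, reusing `model`, `tokenizer`, and `inputs` from the snippet above:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated instead of waiting for the full output.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**inputs, max_new_tokens=128, do_sample=False, streamer=streamer)
```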

#### vLLM (CPU/Intel GPU/CUDA)

```bash 
VLLM_USE_PRECOMPILED=1 pip install git+https://github.com/vllm-project/vllm.git@main
pip install git+https://github.com/huggingface/transformers.git
```

Start a vLLM server:
```bash
vllm serve Intel/GLM-4.7-Flash-int4-AutoRound \
     --host localhost \
     --tool-call-parser glm47 \
     --reasoning-parser glm45 \
     --enable-auto-tool-choice \
     --served-model-name glm-4.7-flash \
     --tensor-parallel-size 4 \
     --port 4321
```

Send a request (the model name must match `--served-model-name`):
```bash
curl --noproxy '*' http://127.0.0.1:4321/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Intel/GLM-4.7-Flash-int4-AutoRound",
    "messages": [
      {"role": "user", "content": "hello"}
    ],
    "max_tokens": 256,
    "temperature": 0.6
  }'

"""
"""
```
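
Since vLLM exposes an OpenAI-compatible API, the same request can also be sent from Python. Below is a minimal sketch using the `openai` client, assuming the server above is running on port 4321:

```python
from openai import OpenAI

# vLLM's OpenAI-compatible endpoint; no real API key is required by default.
client = OpenAI(base_url="http://127.0.0.1:4321/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="glm-4.7-flash",  # must match --served-model-name
    messages=[{"role": "user", "content": "hello"}],
    max_tokens=256,
    temperature=0.6,
)
print(response.choices[0].message.content)
```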

### Generate the model
**Please make sure you have installed the auto_round package from the correct branch:**
```bash
pip install git+https://github.com/intel/auto-round.git@enable_glm4_moe_lite_quantization
```

```bash
auto_round \
--model=zai-org/GLM-4.7-Flash \
--scheme "W4A16" \
--ignore_layers="shared_experts,layers.0.mlp" \
--format=auto_round \
--enable_torch_compile \
--output_dir=./tmp_autoround
```
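
The same quantization can also be driven from Python. The following is a hedged sketch of the equivalent API, assuming a recent auto-round release where `AutoRound` accepts a model name and a scheme directly; the `--ignore_layers` exclusions from the CLI call above are omitted here and may be version-dependent:

```python
from auto_round import AutoRound

# W4A16: int4 weights, 16-bit activations; group_size=128 and symmetric
# quantization match the settings of this model card.
ar = AutoRound("zai-org/GLM-4.7-Flash", scheme="W4A16")

# Export in the auto_round format, mirroring --format=auto_round above.
ar.quantize_and_save("./tmp_autoround", format="auto_round")
```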

## Ethical Considerations and Limitations

The model can produce factually incorrect output and should not be relied on for factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it may generate lewd, biased, or otherwise offensive outputs.

Therefore, developers should perform safety testing before deploying any applications of the model.

## Caveats and Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

Here is a useful link to learn more about Intel's AI software:

- Intel Neural Compressor [link](https://github.com/intel/neural-compressor)

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

## Cite

```bibtex
@article{cheng2023optimize,
  title={Optimize weight rounding via signed gradient descent for the quantization of LLMs},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}
```

[arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)