---
base_model:
- Qwen/Qwen3-Coder-Next
pipeline_tag: text-generation
---

## Model Details

This is an int4 quantized version of [Qwen/Qwen3-Coder-Next](https://huggingface.co/Qwen/Qwen3-Coder-Next) (group_size 128, symmetric quantization), generated by [intel/auto-round](https://github.com/intel/auto-round). Please follow the license of the original model.
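
To confirm these settings programmatically, the checkpoint exposes them through its config. A minimal check, assuming the standard Hugging Face `quantization_config` layout that auto-round emits:

```python
from transformers import AutoConfig

# Loads only config.json; no model weights are downloaded.
config = AutoConfig.from_pretrained("Intel/Qwen3-Coder-Next-int4-AutoRound")

# Expect bits=4, group_size=128, sym=True in the quantization metadata.
print(config.quantization_config)
```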

## How to Use

### HF Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Intel/Qwen3-Coder-Next-int4-AutoRound"

# Load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

# Prepare the model input
prompt = "Write a quick sort algorithm."
messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Conduct text completion
generated_ids = model.generate(**model_inputs, max_new_tokens=65536)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

content = tokenizer.decode(output_ids, skip_special_tokens=True)

print("content:", content)
```
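
For interactive use, you can stream tokens as they are generated rather than waiting for the full completion. A minimal variant of the call above using transformers' `TextStreamer`, reusing `model`, `tokenizer`, and `model_inputs` from the snippet:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**model_inputs, max_new_tokens=4096, streamer=streamer)
```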

### vLLM Usage

```bash
vllm serve Intel/Qwen3-Coder-Next-int4-AutoRound \
    --trust-remote-code \
    --dtype bfloat16 \
    --tensor-parallel-size 1
```
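
Once the server is up, it exposes an OpenAI-compatible API (on port 8000 by default). A minimal client sketch, assuming a local server and the official `openai` Python package:

```python
from openai import OpenAI

# vLLM does not check the API key by default; any placeholder works.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Intel/Qwen3-Coder-Next-int4-AutoRound",
    messages=[{"role": "user", "content": "Write a quick sort algorithm."}],
)
print(response.choices[0].message.content)
```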

## Generate the Model

```bash
auto-round --model_name Qwen/Qwen3-Coder-Next --iters 200 --bits 4 --output_dir Qwen3-Coder-Next-int4-AutoRound
```
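
The same quantization can also be driven from Python. A rough sketch of the equivalent call, based on auto-round's documented `AutoRound` interface (check the [AutoRound](https://github.com/intel/auto-round) repo for the current signature):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "Qwen/Qwen3-Coder-Next"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

# int4, group_size 128, symmetric quantization, matching the settings in Model Details.
autoround = AutoRound(model, tokenizer, bits=4, group_size=128, sym=True, iters=200)
autoround.quantize()
autoround.save_quantized("Qwen3-Coder-Next-int4-AutoRound", format="auto_round")
```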

## Ethical Considerations and Limitations

The model can produce factually incorrect output and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and its fine-tuning data, it is possible that this model could generate lewd, biased, or otherwise offensive outputs. Therefore, before deploying any applications of the model, developers should perform safety testing.

## Caveats and Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.
Here are a couple of useful links to learn more about Intel's AI software:

- [Intel Neural Compressor](https://github.com/intel/neural-compressor)
- [AutoRound](https://github.com/intel/auto-round)

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

## Cite

```bibtex
@article{cheng2023optimize,
  title={Optimize weight rounding via signed gradient descent for the quantization of {LLMs}},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}
```

[arXiv](https://arxiv.org/abs/2309.05516) · [GitHub](https://github.com/intel/auto-round)