YCWTG committed on
Commit a11e67e · verified · 1 Parent(s): 75df2b1

Update README.md

Files changed (1)
  1. README.md +3 -2
README.md CHANGED
```diff
@@ -4,7 +4,7 @@ base_model:
 tags:
 - qwen3
 - moe
-- int3
+- int4
 - quantized
 - autoround
 license: apache-2.0
@@ -268,6 +268,7 @@ Users (both direct and downstream) should be made aware of the risks, biases and
 Here are a couple of useful links to learn more about Intel's AI software:
 
 - [Intel Neural Compressor](https://github.com/intel/neural-compressor)
+- [AutoRound](https://github.com/intel/auto-round)
 
 ## Disclaimer
 
@@ -277,4 +278,4 @@ The license on this model does not constitute legal advice. We are not responsib
 
 @article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} }
 
-[arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
+[arxiv](https://arxiv.org/abs/2309.05516)
```
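
For reference, after the first hunk is applied the model card's YAML front-matter tag block reads as follows. This is reconstructed only from the context and `+` lines of that hunk; any fields outside the hunk (such as `base_model`) are unchanged and not shown here.

```yaml
tags:
- qwen3
- moe
- int4        # corrected from int3: the model is quantized to 4-bit weights
- quantized
- autoround
license: apache-2.0
```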