|
|
--- |
|
|
license: llama3.3 |
|
|
base_model: |
|
|
- meta-llama/Llama-3.3-70B-Instruct |
|
|
--- |
|
|
|
|
|
## Model Details |
|
|
|
|
|
This model card describes the MXFP8, MXFP4, and mixed-MXFP4 quantizations of [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct), produced with [intel/auto-round](https://github.com/intel/auto-round).
|
|
The quantized models cannot be published due to license restrictions. Please follow the Intel Neural Compressor (INC) example README to generate and evaluate the low-precision models yourself.
|
|
|
|
|
## How to Use |
|
|
|
|
|
The step-by-step README of quantization and evaluation can be found in [Intel Neural Compressor Examples](https://github.com/intel/neural-compressor/blob/master/examples/pytorch/nlp/huggingface_models/language-modeling/quantization/auto_round/llama3/README.md). |
|
|
|
|
|
## Evaluation Results
|
|
|
|
|
| Task | Backend | BF16 | MXFP8 | MXFP4 (mixed) |
|
|
|:-------------------:|:-------:|:------:|:------:|:------------:| |
|
|
| hellaswag | vllm | 0.6661 | 0.6660 | 0.6567 | |
|
|
| piqa | vllm | 0.8357 | 0.8324 | 0.8303 | |
|
|
| mmlu_llama | vllm | 0.8336 | 0.8326 | 0.8291 | |
|
|
| gsm8k_llama(strict) | vllm | 0.9492 | 0.9439 | 0.9416 | |
|
|
| average | vllm | 0.8211 | 0.8187 | 0.8144 | |
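As a sanity check, the per-task scores above can be averaged to reproduce the last row of the table (a minimal sketch; the values are copied verbatim from the table, and any small discrepancy in the last digit comes from rounding to four decimal places):

```python
# Per-task accuracies copied from the evaluation table above
# (hellaswag, piqa, mmlu_llama, gsm8k_llama strict).
scores = {
    "BF16":          [0.6661, 0.8357, 0.8336, 0.9492],
    "MXFP8":         [0.6660, 0.8324, 0.8326, 0.9439],
    "MXFP4 (mixed)": [0.6567, 0.8303, 0.8291, 0.9416],
}

for name, vals in scores.items():
    avg = sum(vals) / len(vals)
    print(f"{name}: {avg:.4f}")
```

The computed averages match the table's `average` row to within rounding, which confirms it is a plain arithmetic mean over the four tasks.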
|
|
|
|
|
|
|
|
## Ethical Considerations and Limitations |
|
|
|
|
|
The model can produce factually incorrect output and should not be relied on for factually accurate information.
|
|
Because of the limitations of the pretrained model and the finetuning datasets, this model may generate lewd, biased, or otherwise offensive outputs.
|
|
|
|
|
Therefore, developers should perform safety testing before deploying any application of the model.
|
|
|
|
|
## Caveats and Recommendations |
|
|
|
|
|
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. |
|
|
|
|
|
Here are a couple of useful links to learn more about Intel's AI software: |
|
|
|
|
|
- [Intel Neural Compressor](https://github.com/intel/neural-compressor) |
|
|
- [AutoRound](https://github.com/intel/auto-round) |
|
|
|
|
|
## Disclaimer |
|
|
|
|
|
The license on this model does not constitute legal advice. |
|
|
We are not responsible for the actions of third parties who use this model. |
|
|
Please consult an attorney before using this model for commercial purposes. |
|
|
|