|
|
--- |
|
|
license: llama3.1 |
|
|
base_model: |
|
|
- meta-llama/Llama-3.1-70B-Instruct |
|
|
--- |
|
|
|
|
|
## Model Details |
|
|
|
|
|
This model card describes MXFP8 and NVFP4 quantizations of [meta-llama/Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct) produced with [intel/auto-round](https://github.com/intel/auto-round). |
|
|
The quantized models cannot be published due to license limitations. Please follow the Intel Neural Compressor (INC) example README to generate and evaluate the low-precision models. |
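As a rough numeric illustration of what MXFP8 means (per the OCP Microscaling formats: 32-element blocks of FP8 E4M3 values sharing one power-of-two E8M0 scale), here is a minimal pure-Python sketch. It ignores subnormals and is not the AutoRound implementation; it only shows the block-scaled rounding idea:

```python
import math

E4M3_MAX = 448.0  # largest finite FP8 E4M3 magnitude

def round_to_e4m3(v):
    """Round a float to the nearest FP8 E4M3 value (normals only; illustrative)."""
    if v == 0.0:
        return 0.0
    sign = 1.0 if v > 0 else -1.0
    mag = min(abs(v), E4M3_MAX)
    e = math.floor(math.log2(mag))
    m = round(mag / 2.0**e * 8) / 8  # 3 mantissa bits -> steps of 1/8
    return sign * min(m * 2.0**e, E4M3_MAX)

def quantize_mxfp8_block(block):
    """Quantize a 32-element block: E4M3 elements sharing one E8M0 (power-of-two) scale."""
    assert len(block) == 32
    amax = max(abs(v) for v in block)
    # choose a power-of-two scale so the largest element fits in E4M3 range
    scale = 2.0 ** math.ceil(math.log2(amax / E4M3_MAX)) if amax > 0 else 1.0
    return [round_to_e4m3(v / scale) for v in block], scale  # dequantize as q * scale
```

NVFP4 is analogous but uses 16-element blocks of FP4 E2M1 values with an FP8 E4M3 block scale, which is why its scores below trail MXFP8 slightly.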
|
|
|
|
|
## How to Use |
|
|
|
|
|
A step-by-step README covering quantization and evaluation is available in the [Intel Neural Compressor Examples](https://github.com/intel/neural-compressor/blob/master/examples/pytorch/nlp/huggingface_models/language-modeling/quantization/auto_round/llama3/README.md). |
|
|
|
|
|
## Evaluation Results |
|
|
|
|
|
| Task | Backend | BF16 | MXFP8 | NVFP4 | |
|
|
|:-------------------:|:-------:|:------:|:------:|:------:| |
|
|
| hellaswag | vllm | 0.6609 | 0.6612 | 0.6547 | |
|
|
| piqa | vllm | 0.8357 | 0.8379 | 0.8303 | |
|
|
| mmlu_llama | vllm | 0.8388 | 0.8367 | 0.8311 | |
|
|
| gsm8k_llama(strict) | vllm | 0.9522 | 0.9500 | 0.9401 | |
|
|
| average | vllm | 0.8219 | 0.8215 | 0.8141 | |
|
|
|
|
|
|
|
|
## Ethical Considerations and Limitations |
|
|
|
|
|
The model can produce factually incorrect output and should not be relied on for factually accurate information. |
|
|
Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased, or otherwise offensive outputs. |
|
|
|
|
|
Therefore, before deploying any applications of the model, developers should perform safety testing. |
|
|
|
|
|
## Caveats and Recommendations |
|
|
|
|
|
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. |
|
|
|
|
|
Here are a couple of useful links to learn more about Intel's AI software: |
|
|
|
|
|
- [Intel Neural Compressor](https://github.com/intel/neural-compressor) |
|
|
- [AutoRound](https://github.com/intel/auto-round) |
|
|
|
|
|
## Disclaimer |
|
|
|
|
|
The license on this model does not constitute legal advice. |
|
|
We are not responsible for the actions of third parties who use this model. |
|
|
Please consult an attorney before using this model for commercial purposes. |