---
license: apache-2.0
---

# Magistral-Small-2506-FP8-dynamic

Quantized version of [Magistral-Small-2506](https://huggingface.co/mistralai/Magistral-Small-2506): linear layers are quantized to FP8 with dynamic per-token activation scales (`lm_head` is left unquantized), roughly halving the weight memory footprint relative to BF16.
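
The compressed-tensors checkpoint can be served by any runtime that understands FP8 compressed-tensors weights. Below is a minimal sketch using vLLM, assuming a recent vLLM build with compressed-tensors support; the model path shown is this repository's name and may need to be replaced by a local directory or full Hub ID:

```python
from vllm import LLM, SamplingParams

# vLLM reads the quantization config from the checkpoint and
# handles the FP8 compressed-tensors weights automatically.
llm = LLM(model="Magistral-Small-2506-FP8-dynamic")

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain FP8 dynamic quantization in one paragraph."], params)
print(outputs[0].outputs[0].text)
```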

## Creation

This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet
below.

```python
import argparse
import os

from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot
from transformers import AutoModelForCausalLM, AutoTokenizer


def main():
    parser = argparse.ArgumentParser(description="Quantize a transformer model to FP8")
    parser.add_argument("--model_id", type=str, required=True,
                        help='The model ID from HuggingFace (e.g., "meta-llama/Meta-Llama-3-8B-Instruct")')
    parser.add_argument("--save_path", type=str, default=".",
                        help="Directory in which to save the quantized model; "
                             "the output folder is named <model_name>-FP8-dynamic")
    args = parser.parse_args()

    # Load model and tokenizer
    model = AutoModelForCausalLM.from_pretrained(
        args.model_id, device_map="auto", torch_dtype="auto", trust_remote_code=True,
    )
    tokenizer = AutoTokenizer.from_pretrained(args.model_id, trust_remote_code=True)

    # Configure the quantization algorithm and scheme:
    # FP8 weights with dynamic per-token activation scales, skipping lm_head
    recipe = QuantizationModifier(
        targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"]
    )

    # Apply quantization (FP8_DYNAMIC is data-free, so no calibration set is needed)
    oneshot(model=model, recipe=recipe)

    save_path = os.path.join(args.save_path, args.model_id.split("/")[-1] + "-FP8-dynamic")
    os.makedirs(save_path, exist_ok=True)

    # Save to disk in compressed-tensors format
    model.save_pretrained(save_path)
    tokenizer.save_pretrained(save_path)
    print(f"Model and tokenizer saved to: {save_path}")


if __name__ == "__main__":
    main()
```
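
Invoking the script as, for example, `python quantize_fp8.py --model_id mistralai/Magistral-Small-2506` (the script filename here is illustrative) should reproduce this checkpoint under `./Magistral-Small-2506-FP8-dynamic`.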