---
library_name: transformers
tags: []
---

# Mistral-7B-v0.1-Italian-SAVA-instruct

<div align="center">

<img src="https://github.com/Andrew-Wyn/images/blob/master/sava/italian_adapt-img.jpg?raw=true" width="400" height="400" style="border-radius:10%" />

</div>

The **Mistral-7B-v0.1-Adapted** collection of large language models (LLMs) is a family of 7B generative models (text in / text out) adapted from the **Mistral-7B-v0.1** base model.

*Mistral-v0.1-Italian-SAVA-instruct* is a continually trained and instruction-tuned Mistral model whose vocabulary was inherited from **Minerva-3B**.
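
The SAVA vocabulary adaptation itself is described in the paper cited below. Purely to illustrate the mechanics involved, the sketch here swaps the Mistral tokenizer for the Minerva-3B one and re-initializes the embedding matrix with a naive copy-or-mean rule standing in for SAVA's learned semantic alignment; the Minerva repository id and the initialization rule are assumptions, not the released adaptation code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Generic vocabulary-swap sketch, NOT the SAVA alignment itself:
# tokens shared by both vocabularies keep their original embedding,
# every other row falls back to the mean source embedding.
source_tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
target_tok = AutoTokenizer.from_pretrained("sapienzanlp/Minerva-3B-base-v1.0")  # assumed repo id
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16)

old_emb = model.get_input_embeddings().weight.data
new_emb = old_emb.mean(dim=0).repeat(len(target_tok), 1)

source_vocab = source_tok.get_vocab()
for token, new_id in target_tok.get_vocab().items():
    old_id = source_vocab.get(token)
    if old_id is not None:
        new_emb[new_id] = old_emb[old_id]

model.resize_token_embeddings(len(target_tok))
model.get_input_embeddings().weight.data.copy_(new_emb)
# The output projection (lm_head) needs the same treatment before continual training.
```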

**Model developers:** SapienzaNLP, ISTI-CNR, ILC-CNR

**Model Architecture:** Mistral-7B-v0.1-Adapted is an auto-regressive language model that uses an optimized transformer architecture.

## Data used for the adaptation

The **Mistral-7B-v0.1-Adapted** models are trained on a collection of Italian and English data extracted from [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX).
The data are skewed toward Italian, with English accounting for roughly one quarter of the mixture: the first 9B tokens were taken from the Italian portion of CulturaX and the first 3B tokens from the English portion.
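
The exact extraction pipeline is not part of this card. As a minimal sketch only, a fixed per-language token budget could be collected from CulturaX with the `datasets` streaming API roughly as follows; the budgets, the helper name, and the use of the adapted tokenizer for counting are illustrative assumptions.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Count tokens with the adapted (Minerva-derived) tokenizer
tokenizer = AutoTokenizer.from_pretrained("SemanticAlignment/Mistral-v0.1-Italian-SAVA-instruct")

def take_first_tokens(language: str, token_budget: int) -> list[str]:
    """Stream one language split of CulturaX and keep documents until the token budget is reached."""
    stream = load_dataset("uonlp/CulturaX", language, split="train", streaming=True)
    kept, seen = [], 0
    for doc in stream:
        kept.append(doc["text"])
        seen += len(tokenizer(doc["text"])["input_ids"])
        if seen >= token_budget:
            break
    return kept

italian_docs = take_first_tokens("it", 9_000_000_000)  # first ~9B Italian tokens
english_docs = take_first_tokens("en", 3_000_000_000)  # first ~3B English tokens
```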

## Data used for the instruction tuning (SFT)

The following datasets were used in the instruction-tuning procedure:

| | Dataset | Language | Instances | |
| |------|-----|------| |
| | [TÜLU-v3](https://huggingface.co/datasets/allenai/tulu-3-sft-mixture) | EN | 940,000 | |
| | [LIMA](https://huggingface.co/datasets/GAIR/lima) | IT/EN | 2,000 | |
| | [WildChat-IT](https://huggingface.co/datasets/allenai/WildChat-1M) | IT | 5,000 | |
| | [TowerBlocks-v0.2](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2) | IT/EN | 7,276 | |
| | [GPT-4o-ITA-Instruct](https://huggingface.co/datasets/DeepMount00/GPT-4o-ITA-INSTRUCT) | IT | 15,000 | |
| | [Aya](https://huggingface.co/datasets/CohereLabs/aya_dataset) | IT | 700 | |

The model was trained for two epochs on the aforementioned data.
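
The training framework is not specified on this card. As a hedged sketch only, a two-epoch supervised fine-tuning run over a chat-format mixture like the one above could be set up with TRL's `SFTTrainer` roughly as follows; the checkpoint name, the single dataset shown, and the hyperparameters are illustrative assumptions.

```python
from datasets import concatenate_datasets, load_dataset
from trl import SFTConfig, SFTTrainer

# Illustrative only: TÜLU-v3 already stores conversations in a "messages" column;
# the other components of the mixture would be converted to the same schema first.
tulu = load_dataset("allenai/tulu-3-sft-mixture", split="train")
train_data = concatenate_datasets([tulu.select_columns(["messages"])])

trainer = SFTTrainer(
    model="SemanticAlignment/Mistral-v0.1-Italian-SAVA",  # assumed id of the continually trained checkpoint
    train_dataset=train_data,
    args=SFTConfig(
        num_train_epochs=2,        # two epochs, as stated above
        output_dir="mistral-sava-sft",
        bf16=True,
    ),
)
trainer.train()
```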

## Evaluation

Adapted models are evaluated on [ITA-Bench](https://github.com/SapienzaNLP/ita-bench).

| Model | MMLU (5-shot) | ARC-C (5-shot) | HellaSwag (0-shot) | IFEval (inst_level) |
| |------|-----|------|------|------| |
| | Llama-3.1-SAVA | 56.9 | 42.3 | 58.1 | 62.3 | |
| | Llama-3.1-LAPT | 58.5 | 47.9 | 62.4 | 67.3 | |
| | **Mistral-0.1-SAVA** | 51.5 | 41.6 | 57.5 | 61.7 | |
| | Mistral-0.1-LAPT | 52.9 | 39.9 | 58.4 | 60.0 | |
| | Llama-3.1-Original | 47.4 | 43.1 | 57.9 | 66.8 | |
| | Mistral-0.1-Original | 41.6 | 38.9 | 50.0 | 42.2 | |

## Use with Transformers

You can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the `generate()` function.

Make sure to update your `transformers` installation via `pip install --upgrade transformers`.

```python
import torch
from transformers import AutoTokenizer, pipeline

model_id = "SemanticAlignment/Mistral-v0.1-Italian-SAVA-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)

generator = pipeline(
    "text-generation",
    model=model_id,
    device_map="auto",
    dtype=torch.bfloat16,
)

conversations = [
    [
        {"role": "system", "content": "Sei un assistente utile, rispondi in modo conciso e coerente."},
        {"role": "user", "content": "Cosa si può fare in una bella giornata di sole?"},
    ]
]

# Render the conversations with the chat template (as plain strings)
chat_samples = tokenizer.apply_chat_template(conversations, tokenize=False)

# Get the number of prompt tokens for the first conversation
prompt_tokens_number = len(tokenizer(chat_samples[0])["input_ids"])

outputs = generator(
    conversations,
    max_new_tokens=2048,
    eos_token_id=tokenizer.eos_token_id,
)

# The assistant reply is the last message of each returned conversation
print(outputs[0][0]["generated_text"][-1]["content"])
```

Code: https://github.com/SapienzaNLP/sava

## Acknowledgements

Thanks to Leonardo Colosi (colosi@diag.uniroma1.it) for helping with the instruction-tuning phase.

We acknowledge ISCRA for awarding this project access to the LEONARDO supercomputer, owned by the EuroHPC Joint Undertaking and hosted by CINECA (Italy).

## Citation

If you use any part of this work, please consider citing the paper as follows:

```bibtex
@misc{moroni2025optimizingllmsitalianreducing,
      title={Optimizing LLMs for Italian: Reducing Token Fertility and Enhancing Efficiency Through Vocabulary Adaptation},
      author={Luca Moroni and Giovanni Puccetti and Pere-Lluis Huguet Cabot and Andrei Stefan Bejgu and Edoardo Barba and Alessio Miaschi and Felice Dell'Orletta and Andrea Esuli and Roberto Navigli},
      year={2025},
      eprint={2504.17025},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2504.17025},
}
```