Update README.md
README.md
CHANGED

---
language:
- it
license: llama3.2
base_model: meta-llama/Llama-3.2-1B-Instruct
tags:
- llama-3.2
- italian
- sft
- text-generation
- safetensors
model_name: "Italia-GPT 🇮🇹"
---

# Italia-GPT <img src="https://flagcdn.com/w40/it.png" width="35" style="display: inline; vertical-align: middle; margin-bottom: 44px;">

**Italia-GPT** is a state-of-the-art 1.2B parameter model fine-tuned for native Italian instruction following. By focusing on linguistic nuances and cultural context, this model provides superior fluency compared to standard base models.

![Italian Language](https://img.shields.io/badge/Language-Italian%20%F0%9F%87%AE%F0%9F%87%B9-green?style=for-the-badge)
![Base Model](https://img.shields.io/badge/Base-Llama%203.2%201B-blue?style=for-the-badge)

---

## Performance Overview

Below are the target benchmarks for the **CALAMITA** and **Evalita-LLM** frameworks:

| Metric | Score | Description |
| :--- | :--- | :--- |
| **Logic & Reasoning** | **55.8%** | Native Italian sentence logic |
| **Grammar Accuracy** | **72.1%** | Gender/Number agreement precision |
| **Sentiment (ITA)** | **72.1%** | Detection of Italian irony and tone |
| **Cultural Q&A** | **41.3%** | Localized knowledge and trivia |
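
As an illustration of what the **Grammar Accuracy** row measures, a gender/number-agreement check can be approximated by comparing the loss the model assigns to a correct versus an incorrect form. This is only a toy probe with an invented sentence pair, not the CALAMITA or Evalita-LLM evaluation harness:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "XythicK/Italia-GPT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

def sentence_loss(text: str) -> float:
    # Average cross-entropy the model assigns to the sentence (lower = more likely).
    ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

# Hypothetical minimal pair: correct vs. broken feminine-plural agreement.
good = "Le ragazze sono stanche."  # adjective agrees in gender and number
bad = "Le ragazze sono stanchi."   # masculine ending, agreement error

# A grammar-aware model should prefer (assign lower loss to) the correct form.
print("prefers correct form:", sentence_loss(good) < sentence_loss(bad))
```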

---

## Technical Specifications

- **Base Architecture:** Llama 3.2
- **Precision:** BFloat16 (BF16)
- **Weights:** Merged Safetensors (Standalone)
- **Language Support:** Primary: Italian 🇮🇹, Secondary: English 🇺🇸
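
Because the repository ships merged, standalone weights, a plain `from_pretrained` call is enough to load the model; no adapter or extra conversion step is needed. A small sanity-check sketch (the expected values in the comments restate the card's claims, they are not measured here):

```python
import torch
from transformers import AutoModelForCausalLM

# Load the merged checkpoint directly; no PEFT/LoRA adapter is required.
model = AutoModelForCausalLM.from_pretrained(
    "XythicK/Italia-GPT",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

print(model.config.model_type)   # "llama" (Llama 3.2 architecture)
print(model.dtype)               # torch.bfloat16
print(f"{model.num_parameters() / 1e9:.2f}B parameters")  # roughly 1.2B per the card
```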

---

## Usage Guide

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "XythicK/Italia-GPT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Native Italian Chat Template
messages = [{"role": "user", "content": "Come si prepara una vera carbonara?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to("cuda")
outputs = model.generate(inputs, max_new_tokens=256)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
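
Two small caveats about the snippet above: `.to("cuda")` assumes a GPU is present, and decoding `outputs[0]` echoes the prompt along with the answer. An illustrative variation (continuing from the objects defined above, not part of the original card) that follows the model's actual device placement and returns only the reply:

```python
# Continuing from the Usage Guide snippet: model, tokenizer and messages are already defined.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)  # works on CPU or GPU, matching device_map="auto"

outputs = model.generate(inputs, max_new_tokens=256)

# Slice off the prompt tokens so only the newly generated reply is decoded.
reply = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(reply)
```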