---
license: apache-2.0
datasets:
- andreabac3/Quora-Italian-Fauno-Baize
- andreabac3/StackOverflow-Italian-Fauno-Baize
- andreabac3/MedQuaAD-Italian-Fauno-Baize
language:
- it
- en
pipeline_tag: text-generation
---

# cerbero-7b Italian LLM 🚀

> 📢 **cerbero-7b** is an **Italian Large Language Model** (LLM) with a large context length of **8192 tokens** that excels in linguistic benchmarks.

<p align="center">
  <img width="300" height="300" src="./README.md.d/cerbero.png">
</p>

cerbero-7b is built on **mistral-7b**, which outperforms Llama 2 13B across all benchmarks and surpasses Llama 1 34B in numerous metrics.

**cerbero-7b** is specifically crafted to fill the void in Italy's AI landscape.

A **cambrian explosion** of **Italian Language Models** is essential for building advanced AI architectures that can cater to the diverse needs of the population.

**cerbero-7b**, alongside companions like [**Camoscio**](https://github.com/teelinsan/camoscio) and [**Fauno**](https://github.com/RSTLess-research/Fauno-Italian-LLM), aims to kick-start this revolution in Italy, ushering in an era where sophisticated **AI solutions** can seamlessly interact with and understand the intricacies of the **Italian language**, empowering **innovation** across **industries** and fostering a deeper **connection** between **technology** and the **people** it serves.

**cerbero-7b** is released under the **permissive** Apache 2.0 **license**, allowing **unrestricted use**, even in **commercial applications**.

## Why Cerbero? 🤔

The name "Cerbero," inspired by the three-headed dog that guards the gates of the Underworld in Greek mythology, encapsulates the essence of our model, which draws strength from three foundational pillars:

- **Base Model: mistral-7b** 🏗️

  cerbero-7b builds upon the formidable **mistral-7b** as its base model. This choice ensures a robust foundation, leveraging the power and capabilities of a cutting-edge language model.

- **Datasets: Fauno Dataset** 📚

  Utilizing the comprehensive **Fauno dataset**, cerbero-7b gains a diverse and rich understanding of the Italian language. The incorporation of varied data sources contributes to its versatility in handling a wide array of tasks.

- **Licensing: Apache 2.0** 🗝️

  Released under the **permissive Apache 2.0 license**, cerbero-7b promotes openness and collaboration. This licensing choice empowers developers with the freedom of unrestricted use, fostering a community-driven approach to advancing AI in Italy and beyond.

## Training Details 📚

cerbero-7b is **fully fine-tuned**, distinguishing itself from LoRA or QLoRA fine-tunes.
The model is trained as an Italian Large Language Model (LLM) on expansive synthetic datasets generated through dynamic self-chat.

### Dataset Composition 📊

We employed the [Fauno training dataset](https://github.com/RSTLess-research/Fauno-Italian-LLM). The training data covers a broad spectrum, incorporating:

- **Medical Data:** Capturing nuances in medical language. 🩺
- **Technical Content:** Extracted from Stack Overflow to enhance the model's understanding of technical discourse. 💻
- **Quora Discussions:** Providing valuable insights into common queries and language usage. ❓
- **Alpaca Data Translation:** Italian-translated content from Alpaca contributes to the model's language richness and contextual understanding. 🦙

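The three Fauno-Baize datasets listed in this card's metadata are published on the Hugging Face Hub, so they can be inspected directly. A minimal sketch, assuming the default configuration and a `train` split:

```python
# Sketch: peek at one of the Fauno training datasets from the Hub.
from datasets import load_dataset

# Assumption: the dataset exposes a default "train" split
quora_it = load_dataset("andreabac3/Quora-Italian-Fauno-Baize", split="train")
print(quora_it[0])  # one self-chat conversation record
```
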
### Training Setup ⚙️

cerbero-7b is trained on an NVIDIA DGX H100:

- **Hardware:** Utilizing 8x H100 GPUs, each with 80 GB of VRAM. 🖥️
- **Parallelism:** DeepSpeed ZeRO stage 1 parallelism for optimal training efficiency. ✨

The model has been trained for **3 epochs**, ensuring a convergence of knowledge and proficiency in handling diverse linguistic tasks.
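
The exact training script is not published in this card; the following is a minimal sketch of how a full fine-tune with DeepSpeed ZeRO stage 1 can be wired up through 🤗 `TrainingArguments`. Everything except the 3 epochs (paths, batch sizes, precision) is a placeholder assumption:

```python
# Minimal sketch (not the official training script): a full fine-tune
# configured with DeepSpeed ZeRO stage 1 via Hugging Face TrainingArguments.
from transformers import TrainingArguments

# DeepSpeed config: ZeRO stage 1 shards optimizer states across the GPUs.
ds_config = {
    "zero_optimization": {"stage": 1},
    "bf16": {"enabled": True},              # assumption: bf16 on H100
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}

training_args = TrainingArguments(
    output_dir="cerbero-7b-finetune",       # hypothetical output path
    num_train_epochs=3,                     # matches the 3 epochs above
    per_device_train_batch_size=4,          # placeholder batch size
    deepspeed=ds_config,                    # hand the config to the Trainer
)
```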

## Getting Started 🚀

You can load cerbero-7b using [🤗transformers](https://huggingface.co/docs/transformers/index):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("galatolo/cerbero-7b")
tokenizer = AutoTokenizer.from_pretrained("galatolo/cerbero-7b")

# Baize-style prompt. In English: "This is a conversation between a human
# and an AI assistant. [|Human|] How can I tell an AI apart from a human? [|AI|]"
prompt = """Questa è una conversazione tra un umano ed un assistente AI.
[|Umano|] Come posso distinguere un AI da un umano?
[|AI|]"""

input_ids = tokenizer(prompt, return_tensors='pt').input_ids
with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=1024)

generated_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(generated_text)
```
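
The prompt uses the Baize-style template of the Fauno datasets, alternating `[|Umano|]` and `[|AI|]` turns. As an illustrative sketch (not part of the official card), a multi-turn chat can keep a running history and cut each generation at the next `[|Umano|]` marker:

```python
# Sketch: multi-turn chat with the Baize-style template. Generation is cut
# at the next "[|Umano|]" marker, where the model would start writing the
# human's next turn. Reuses `model` and `tokenizer` from the snippet above.
history = "Questa è una conversazione tra un umano ed un assistente AI.\n"

# Questions: "How can I tell an AI apart from a human?" /
# "Can you give me a concrete example?"
for question in ["Come posso distinguere un AI da un umano?",
                 "Puoi farmi un esempio concreto?"]:
    history += f"[|Umano|] {question}\n[|AI|]"
    input_ids = tokenizer(history, return_tensors="pt").input_ids
    with torch.no_grad():
        output_ids = model.generate(input_ids, max_new_tokens=256)
    # Decode only the newly generated tokens, then stop at the next human turn
    completion = tokenizer.decode(output_ids[0][input_ids.shape[1]:],
                                  skip_special_tokens=True)
    answer = completion.split("[|Umano|]")[0].strip()
    history += f" {answer}\n"
    print(f"[|AI|] {answer}")
```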