---
license: gemma
language:
- sl
- en
- hr
- sr
- bs
base_model:
- cjvt/GaMS-9B-Instruct
pipeline_tag: text-generation
---

# Model Card for GaMS-DPO-Translator

GaMS-9B-Instruct-DPO-Translator is a fine-tuned version of GaMS-9B-Instruct: Direct Preference Optimization (DPO) was performed on the original model, with a preference dataset synthetically generated using GaMS-9B-SFT-Translator and EuroLLM-9B-Instruct.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/652d40a78fa1fc665a63d75a/HdgRJvJu9I34c3sgva70t.png)

## Basic information

- **Developed by:** a team of researchers at the University of Ljubljana, Faculty of Computer and Information Science. Team members: Dario Vajda, Domen Vreš and Marko Robnik-Šikonja.
- **Languages:** Slovene and English (primary); Croatian, Bosnian and Serbian (secondary). The model might also work for other languages supported by Gemma 2, even though it was not continually pretrained on them.
- **Base model:** [cjvt/GaMS-9B-Instruct](https://huggingface.co/cjvt/GaMS-9B-Instruct)
- **License:** [Gemma](https://ai.google.dev/gemma/terms)

## Usage

The model can be run through the `pipeline` API using the following code:

```python
from transformers import pipeline

model_id = "GaMS-Beta/GaMS-9B-Instruct-DPO-Translator"

pline = pipeline(
    "text-generation",
    model=model_id,
    device_map="cuda"  # replace with "mps" to run on a Mac device
)

# Example of response generation; the prompt says
# "Translate the following English text into Slovene."
message = [{"role": "user", "content": "Prevedi naslednje angleško besedilo v slovenščino.\nToday is a nice day."}]
response = pline(message, max_new_tokens=512)
print("Translation:", response[0]["generated_text"][-1]["content"])
```
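
If you prefer working below the `pipeline` abstraction, the same generation can be reproduced with `AutoModelForCausalLM` and the tokenizer's chat template. This is a minimal sketch; the `bfloat16` dtype is an assumption (typical for Gemma 2 derivatives), not a documented requirement of this model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GaMS-Beta/GaMS-9B-Instruct-DPO-Translator"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16, as is common for Gemma 2 models
    device_map="cuda",
)

message = [{"role": "user", "content": "Prevedi naslednje angleško besedilo v slovenščino.\nToday is a nice day."}]
# apply_chat_template inserts the turn markers the model was trained with
input_ids = tokenizer.apply_chat_template(
    message, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```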

For multi-GPU inference, set `device_map` to `auto`:

```python
from transformers import pipeline

model_id = "GaMS-Beta/GaMS-9B-Instruct-DPO-Translator"

pline = pipeline(
    "text-generation",
    model=model_id,
    device_map="auto"
)

# Example of response generation
message = [{"role": "user", "content": "Prevedi naslednje angleško besedilo v slovenščino.\nToday is a nice day."}]
response = pline(message, max_new_tokens=512)
print("Model's response:", response[0]["generated_text"][-1]["content"])

# Example of a conversation chain; the follow-up asks
# "Can you describe this event in more detail?"
new_message = response[0]["generated_text"]
new_message.append({"role": "user", "content": "Lahko bolj podrobno opišeš ta dogodek?"})
response = pline(new_message, max_new_tokens=1024)
print("Model's response:", response[0]["generated_text"][-1]["content"])
```

## Data

Data for fine-tuning the original model was acquired by translating large corpora of Wikipedia articles, CC-News articles, BookCorpus texts, and English conversational datasets with two models (GaMS-9B-SFT-Translator and EuroLLM-9B-Instruct). The candidate translations were then ranked with automatic metrics for translation quality and reliability.
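
As an illustration, preference pairs of the kind DPO consumes could be assembled as below. This is a hypothetical sketch: the `build_preference_pairs` helper, its field names, and the `score` callable stand in for the actual (unpublished) data pipeline; `prompt`/`chosen`/`rejected` is simply the record format commonly used for DPO training data.

```python
# Hypothetical sketch: build DPO preference records from two candidate
# translations per source segment, ranked by an automatic quality score.
def build_preference_pairs(segments, score):
    """segments: iterable of dicts with 'source', 'cand_a', 'cand_b'.
    score: callable scoring a (source, translation) pair; higher is better."""
    pairs = []
    for seg in segments:
        a = score(seg["source"], seg["cand_a"])
        b = score(seg["source"], seg["cand_b"])
        if a == b:
            continue  # no preference signal, skip the segment
        chosen, rejected = (
            (seg["cand_a"], seg["cand_b"]) if a > b else (seg["cand_b"], seg["cand_a"])
        )
        pairs.append({
            "prompt": f"Prevedi naslednje angleško besedilo v slovenščino.\n{seg['source']}",
            "chosen": chosen,      # higher-scoring translation
            "rejected": rejected,  # lower-scoring translation
        })
    return pairs
```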

## Training

The model was trained on the [Vega HPC](https://izum.si/vega_slv/) system.
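
Details of the training setup are not given here. For orientation only, a DPO run over records in the `prompt`/`chosen`/`rejected` format is commonly expressed with TRL's `DPOTrainer`; the sketch below assumes a recent TRL version, and every hyperparameter value is a placeholder rather than the configuration used for this model:

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "cjvt/GaMS-9B-Instruct"  # the base model being preference-tuned
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Toy preference records; the real ones came from ranked machine translations
pairs = [
    {"prompt": "Prevedi naslednje angleško besedilo v slovenščino.\nToday is a nice day.",
     "chosen": "Danes je lep dan.",
     "rejected": "Danes je dan lep."},
]
train_dataset = Dataset.from_list(pairs)

config = DPOConfig(
    output_dir="gams-dpo-translator",
    beta=0.1,                       # placeholder DPO temperature
    per_device_train_batch_size=1,  # placeholder
    num_train_epochs=1,             # placeholder
)
trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```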

## Evaluation

The model was evaluated with our custom script on three types of data (CC-News, Nemotron, and Wikipedia texts). The results are shown in the following table: the COMET columns are higher-is-better translation quality scores, the last three columns report the share of problematic outputs (lower is better), and bold marks the best value in each column.

| Model | Overall COMET | CC-News | Nemotron | Wikipedia | Bad Lang (%) | Short (%) | Bad Markdown (%) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| gemini-2.5-flash | **0.717982** | 0.702981 | **0.697498** | **0.753924** | **0.35%** | 0.42% | **3.70%** |
| GaMS-9B-Instruct-DPO-Translator | 0.714729 | **0.708317** | 0.689316 | 0.746768 | 1.88% | 1.56% | 13.22% |
| GaMS-9B-SFT-Translator-DPO | 0.708042 | 0.702903 | 0.679462 | 0.742583 | 0.91% | **0.28%** | 18.28% |
| GaMS-27B-Instruct | 0.701284 | 0.686480 | 0.680014 | 0.730733 | 27.28% | 5.36% | 62.07% |
| GaMS-9B-Instruct | 0.693659 | 0.685006 | 0.673394 | 0.723470 | 13.50% | 4.83% | 33.15% |
| EuroLLM-9B-Instruct | 0.689321 | 0.668084 | 0.670723 | 0.729227 | 8.97% | 1.89% | 35.08% |
| GaMS-9B-SFT-Translator | 0.682467 | 0.676580 | 0.673650 | 0.699602 | 5.14% | 1.48% | 30.53% |
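
The evaluation script itself is not published here. For reference, a corpus-level COMET score of the kind reported above can be computed with the `unbabel-comet` package; the checkpoint choice below (`Unbabel/wmt22-comet-da`, which needs source, hypothesis, and reference texts) is an assumption, not necessarily the one our script used:

```python
# pip install unbabel-comet
from comet import download_model, load_from_checkpoint

# Assumption: a reference-based COMET checkpoint; the model card does not
# state which checkpoint the custom evaluation script relied on.
model_path = download_model("Unbabel/wmt22-comet-da")
comet = load_from_checkpoint(model_path)

data = [
    {"src": "Today is a nice day.",  # English source
     "mt": "Danes je lep dan.",      # system translation being scored
     "ref": "Danes je lep dan."},    # human reference translation
]
result = comet.predict(data, batch_size=8, gpus=1)
print("Segment scores:", result.scores)
print("Corpus score:", result.system_score)
```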