---
library_name: peft
tags:
- generated_from_trainer
base_model: Davlan/mT5_base_yoruba_adr
model-index:
- name: yoruba-diacritics-quantized
  results: []
pipeline_tag: text2text-generation
---

# yoruba-diacritics-quantized

This model is a fine-tuned version of [Davlan/mT5_base_yoruba_adr](https://huggingface.co/Davlan/mT5_base_yoruba_adr) on a version of the [Niger-Volta-LTI Yoruba ADR dataset](https://github.com/Niger-Volta-LTI/yoruba-adr), provided by bumie-e on the [Hugging Face Hub](https://huggingface.co/datasets/bumie-e/Yoruba-diacritics-vs-non-diacritics).

## Model description

Fine-tuning was performed with LoRA adapters via the PEFT library, with the aim of improving the model's performance on diacritic restoration and on generating correctly diacritized Yoruba text.
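
As an illustration of the approach, the sketch below shows how a LoRA adapter for a seq2seq model is typically set up with PEFT. The rank, alpha, dropout, and target modules shown are assumptions for illustration, not the exact values used for this checkpoint.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSeq2SeqLM

# Load the base Yoruba mT5 model.
base_model = AutoModelForSeq2SeqLM.from_pretrained("Davlan/mT5_base_yoruba_adr")

# Hypothetical LoRA configuration: r, lora_alpha, lora_dropout, and
# target_modules are illustrative values, not the ones used for this card.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q", "v"],  # attention projection layers in mT5
)

# Wrap the base model so that only the small adapter matrices are trained.
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```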

## Key Features:

- **Base model:** `mT5_base_yoruba_adr` pre-trained on Yoruba text
- **Fine-tuned dataset:** Yoruba diacritics dataset from `bumie-e/Yoruba-diacritics-vs-non-diacritics`
- **Fine-tuning technique:** LoRA (via PEFT)

## Potential Applications:

- Diacritic restoration in Yoruba text
- Generation of Yoruba text with correct diacritics
- Natural language processing tasks for Yoruba language

## Code for Testing:

```python
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the base model, attach the LoRA adapter, and load the matching tokenizer.
config = PeftConfig.from_pretrained("Professor/yoruba-diacritics-quantized")
model = AutoModelForSeq2SeqLM.from_pretrained("Davlan/mT5_base_yoruba_adr")
model = PeftModel.from_pretrained(model, "Professor/yoruba-diacritics-quantized")
tokenizer = AutoTokenizer.from_pretrained("Davlan/mT5_base_yoruba_adr")

# Undiacritized Yoruba input to be restored.
inputs = tokenizer(
    "Mo ti so fun bobo yen sha, aaro la wa bayi",
    return_tensors="pt",
)

device = "cpu"  # switch to "cuda" if a GPU is available
model.to(device)

with torch.no_grad():
    inputs = {k: v.to(device) for k, v in inputs.items()}
    outputs = model.generate(input_ids=inputs["input_ids"], max_new_tokens=100)
    print(tokenizer.batch_decode(outputs.cpu().numpy(), skip_special_tokens=True))
```
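
For deployment, the LoRA weights can optionally be folded into the base model so the PEFT wrapper is no longer needed at inference time. The sketch below uses PEFT's `merge_and_unload` and assumes the base model is loaded in full precision; the output directory name is only an example.

```python
# Fold the LoRA weights into the base model (assumes a full-precision base model).
merged_model = model.merge_and_unload()

# Save the merged model and tokenizer to a hypothetical local directory.
merged_model.save_pretrained("yoruba-diacritics-merged")
tokenizer.save_pretrained("yoruba-diacritics-merged")
```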

## Intended uses & limitations

More information coming

## Training and evaluation data

More information coming 

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding training arguments follows the list):
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
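
The snippet below is a hedged reconstruction of how these hyperparameters might map onto `Seq2SeqTrainingArguments`; the output directory, logging and saving options, and the exact Trainer setup are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

# Reconstruction of the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="yoruba-diacritics-quantized",  # assumed output directory
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size: 32
    warmup_steps=500,
    max_steps=10_000,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # mixed precision (native AMP)
    # Adam betas (0.9, 0.999) and epsilon 1e-8 are the library defaults.
)
```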

### Training results

Coming soon.

### Framework versions

- PEFT 0.7.2.dev0
- Transformers 4.38.0.dev0
- Pytorch 2.0.0
- Datasets 2.16.1
- Tokenizers 0.15.0