---
license: mpl-2.0
language:
- be
metrics:
- accuracy
base_model:
- sshleifer/bart-tiny-random
pipeline_tag: translation
tags:
- seq2seq
- lemmatisation
library_name: transformers
---

# be-tiny-bart

A model for lemmatisation of Belarusian, trained on the [Belarusian-HSE](https://github.com/UniversalDependencies/UD_Belarusian-HSE/tree/master) dataset.

## Model Details

### Model Description

- **Developed by:** Ilia Afanasev
- **Model type:** BART
- **Language(s) (NLP):** Belarusian
- **License:** mpl-2.0
- **Finetuned from model:** sshleifer/bart-tiny-random

### Model Sources

- **Paper:** TBP

## Uses

Sequence-to-sequence transformation.

### Direct Use

The system was fine-tuned for lemmatisation of Modern Standard Belarusian.
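
The model expects as input the space-joined FORM, UPOS and FEATS columns of a CoNLL-U token and generates the lemma. A minimal sketch (the Belarusian token and its feature string below are illustrative, not taken from the dataset):

```
from simpletransformers.seq2seq import Seq2SeqModel

# Load the published checkpoint; CPU is sufficient for a tiny model.
model = Seq2SeqModel(
    encoder_decoder_type="bart",
    encoder_decoder_name="djulian13/be-tiny-bart",
    use_cuda=False,
)

# Input format: "FORM UPOS FEATS"; the model generates the lemma.
print(model.predict(["кнігі NOUN Animacy=Inan|Case=Gen|Gender=Fem|Number=Sing"]))
# expected output: ['кніга']
```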

### Out-of-Scope Use

Downstream use and further fine-tuning (for instance, for text-to-SQL transformation) are unlikely to be fruitful: the model has been fine-tuned for a very specific task that does not transfer to other kinds of sequence-to-sequence transformation.

## Bias, Risks, and Limitations

The model is fine-tuned only for Modern Standard Belarusian, on the rather small Belarusian-HSE dataset. Use its results only after a manual check.


### Recommendations

Use this model only for lemmatisation of Modern Standard Belarusian, and only where silver-standard tagging results are acceptable. Any regional, territorial, or social variation in the input is likely to degrade the output severely.


## How to Get Started with the Model

Use the code below to get started with the model. You will need your data in CoNLL-U format.

```
!pip install simpletransformers

import logging

import pandas as pd
import torch
from simpletransformers.seq2seq import Seq2SeqModel


def load_conllu_dataset(datafile):
    """Read a CoNLL-U file into (input_text, target_text) pairs:
    input_text is "FORM UPOS FEATS", target_text is the lemma."""
    arr = []
    with open(datafile, encoding='utf-8') as inp:
        strings = inp.readlines()
    for s in strings:
        if s.strip() and not s.startswith("#"):
            split_string = s.split('\t')
            arr.append([split_string[1] + " " + split_string[3] + " " + split_string[5],
                        split_string[2]])
    return pd.DataFrame(arr, columns=["input_text", "target_text"])


MODEL_NAME = "djulian13/be-tiny-bart"

logging.basicConfig(level=logging.INFO)
transformers_logger = logging.getLogger("transformers")
transformers_logger.setLevel(logging.WARNING)

model = Seq2SeqModel(
    encoder_decoder_type="bart",
    encoder_decoder_name=MODEL_NAME,
    use_cuda=torch.cuda.is_available(),
)

DATA_PRED_NAME = "test.conllu"

# Predict a lemma for every token line in the file.
pred_data = load_conllu_dataset(DATA_PRED_NAME)["input_text"].tolist()
predictions = iter(model.predict(pred_data))

# Write the predicted lemmas back into the LEMMA column and save the result.
with open(DATA_PRED_NAME, encoding='utf-8') as inp:
    strings = inp.readlines()
predicted = []
for s in strings:
    if s.strip() and not s.startswith("#"):
        split_string = s.split('\t')
        split_string[2] = next(predictions)
        predicted.append('\t'.join(split_string))
    else:
        predicted.append(s)
with open("result.conllu", 'w', encoding='utf-8') as out:
    out.write(''.join(predicted))
```

## Training Details

### Training Data

[Belarusian-HSE](https://github.com/UniversalDependencies/UD_Belarusian-HSE/tree/master)
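
The standard UD splits can be fetched directly from the repository. A sketch (the file names follow the usual UD naming convention; verify them against the repository before use):

```
!wget https://raw.githubusercontent.com/UniversalDependencies/UD_Belarusian-HSE/master/be_hse-ud-train.conllu
!wget https://raw.githubusercontent.com/UniversalDependencies/UD_Belarusian-HSE/master/be_hse-ud-dev.conllu
!wget https://raw.githubusercontent.com/UniversalDependencies/UD_Belarusian-HSE/master/be_hse-ud-test.conllu
```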

### Training Procedure

Virtual environment:

- Python 3.10.12
- Transformers 4.34.0
- sentence-splitter==1.4
- simpletransformers==0.64.3
- stanza==1.8.1
- torch==2.1.0
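
The pinned packages can be installed in one step (a sketch based on the versions above; the exact torch build depends on the local CUDA setup):

```
!pip install transformers==4.34.0 simpletransformers==0.64.3 torch==2.1.0 sentence-splitter==1.4 stanza==1.8.1
```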

The script:

```
import logging
import argparse
import random

import pandas as pd
import torch
from simpletransformers.seq2seq import Seq2SeqModel


def load_conllu_dataset(datafile):
    """Read a CoNLL-U file into (input_text, target_text) pairs:
    input_text is "FORM UPOS FEATS", target_text is the lemma."""
    arr = []
    with open(datafile, encoding='utf-8') as inp:
        strings = inp.readlines()
    for s in strings:
        if s.strip() and not s.startswith("#"):
            split_string = s.split('\t')
            arr.append([split_string[1] + " " + split_string[3] + " " + split_string[5],
                        split_string[2]])
    return pd.DataFrame(arr, columns=["input_text", "target_text"])


def count_matches(labels, preds):
    """Count exact matches between gold and predicted lemmas."""
    return sum(1 if label == pred else 0 for label, pred in zip(labels, preds))


def main(args):
    train_df = load_conllu_dataset(args.train_data)
    args.fraction = float(args.fraction)
    print(f'Loading training dataset of {train_df.shape[0]} tokens')
    eval_df = load_conllu_dataset(args.dev_data)
    random.seed(int(args.seed))
    print(f'Setting seed to {args.seed}')
    if 0.0 < args.fraction < 1.0:
        remainder = int(args.fraction * len(train_df))
        # random_state makes the subsample reproducible for a given seed.
        train_df = train_df.sample(remainder, random_state=int(args.seed))
        print(f'Subsampling training dataset to {train_df.shape[0]} tokens')
    model_args = {
        "reprocess_input_data": True,
        "overwrite_output_dir": True,
        # Character lengths are used as an upper bound on sequence length.
        "max_seq_length": max(len(token) for token in train_df["target_text"].tolist()),
        "train_batch_size": int(args.batch),
        "num_train_epochs": int(args.epochs),
        "save_eval_checkpoints": False,
        "save_model_every_epoch": False,
        "evaluate_generated_text": False,
        "evaluate_during_training": False,
        "evaluate_during_training_verbose": False,
        "use_multiprocessing": False,
        "use_multiprocessing_for_evaluation": False,
        "save_best_model": False,
        "max_length": max(len(token) for token in train_df["input_text"].tolist()),
        "save_steps": -1,
    }
    model = Seq2SeqModel(
        encoder_decoder_type=args.model_type,
        encoder_decoder_name=args.model,
        args=model_args,
        use_cuda=torch.cuda.is_available(),
    )
    model.train_model(train_df, eval_data=eval_df, matches=count_matches)


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--train_data')
    parser.add_argument('--dev_data')
    parser.add_argument('--model_type', default="bart")
    parser.add_argument('--model', default="tiny-bart")
    parser.add_argument('--epochs', default="2")
    parser.add_argument('--batch', default="4")
    parser.add_argument('--fraction', help="Fraction of data", default=1.0)
    parser.add_argument('--seed', help="random seed", default=1590)
    args = parser.parse_args()
    main(args)
```
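
An invocation matching the hyperparameters reported below might look like this (the script filename and the data paths are placeholders):

```
!python train.py \
    --train_data be_hse-ud-train.conllu \
    --dev_data be_hse-ud-dev.conllu \
    --model_type bart \
    --model sshleifer/bart-tiny-random \
    --epochs 2 \
    --batch 7 \
    --seed 1590
```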


#### Training Hyperparameters

- **Training regime:** fp32
- **Epochs:** 2
- **Batch size:** 7
- **Seed:** 1590


#### Speeds, Sizes, Times

The training took around 2.5 hours on a 4 GB GPU (NVIDIA GeForce RTX 3050).

## Evaluation

No evaluation was performed during training.

### Testing Data, Factors & Metrics

#### Testing Data

[YABC](https://github.com/poritski/YABC), a freely downloadable corpus of ≈7.5M words of Belarusian newspaper articles and fiction. For a more detailed description of the dataset, see its page on [Zenodo](https://zenodo.org/records/19349899).

#### Factors

Genre differences: newspaper articles vs. fiction. 

#### Metrics

Evaluation used accuracy (exact lemma match), alongside a qualitative analysis of examples.
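
Accuracy here is simply the fraction of exact lemma matches, mirroring the `count_matches` helper in the training script above:

```
def lemma_accuracy(gold, pred):
    """Fraction of tokens whose predicted lemma exactly matches the gold one."""
    assert len(gold) == len(pred)
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)
```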

### Results

When tested out-of-domain, the model often struggles to generate the correct lemma.

#### Summary

Generally, this model can be used for preliminary tagging of Belarusian. However, if better options are available (for instance, disambiguating between multiple candidate tags with LLMs), they should be preferred.


## Environmental Impact

- **Hardware Type:** Personal laptop (Xiaomi Mi Notebook Pro X 15)
- **Hours used:** 4
- **Carbon emitted:** approx. 0.1 kg. 

## Technical Specifications

### Model Architecture and Objective

- Architecture: BART
- Objective: sequence-to-sequence transformation

### Compute Infrastructure

Personal laptop

#### Hardware

- Xiaomi Mi Notebook Pro X 15

#### Software

- VS Code

## Citation


**BibTeX:**

TBP

**APA:**

TBP


## Model Card Authors

Ilia Afanasev

## Model Card Contact

ilia.afanasev.1997@gmail.com