---
license: mpl-2.0
language:
- be
metrics:
- accuracy
base_model:
- sshleifer/bart-tiny-random
pipeline_tag: translation
tags:
- seq2seq
- lemmatisation
---

# be-tiny-bart
A model for lemmatisation of Belarusian, trained on the [Belarusian-HSE](https://github.com/UniversalDependencies/UD_Belarusian-HSE/tree/master) dataset.
## Model Details
### Model Description
- **Developed by:** Ilia Afanasev
- **Model type:** BART
- **Language(s) (NLP):** Belarusian
- **License:** mpl-2.0
- **Finetuned from model:** sshleifer/bart-tiny-random
### Model Sources
- **Paper:** TBP
## Uses
Sequence-to-sequence transformation.
### Direct Use
The system was fine-tuned for lemmatisation of Modern Standard Belarusian.
### Out-of-Scope Use
Downstream use and further fine-tuning (for instance, for text-to-SQL transformation) seem to be out of scope: the model is small and narrowly specialised for lemmatisation of Modern Standard Belarusian.
## Bias, Risks, and Limitations
The model is fine-tuned only for Modern Standard Belarusian on the rather small Belarusian-HSE dataset. Use its results only after a manual check.
### Recommendations
Use this model only for lemmatisation of Modern Standard Belarusian if you aim for reliable silver-standard annotation. Any kind of regional, territorial, or social variation in the input is likely to degrade the results catastrophically.
## How to Get Started with the Model
Use the code below to get started with the model.
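A minimal inference sketch, not an official snippet: it assumes the checkpoint is available on the Hugging Face Hub as `djulian13/be-tiny-bart` (substitute a local checkpoint path otherwise) and that `simpletransformers` is installed. The input format mirrors the training data: the CoNLL-U `FORM`, `UPOS`, and `FEATS` columns of a token, joined with single spaces.
```python
import torch
from simpletransformers.seq2seq import Seq2SeqModel

# The Hub id "djulian13/be-tiny-bart" is an assumption; adjust as needed.
model = Seq2SeqModel(
    encoder_decoder_type="bart",
    encoder_decoder_name="djulian13/be-tiny-bart",
    use_cuda=torch.cuda.is_available(),
)

# One token per input string: "FORM UPOS FEATS", as in the training data.
to_lemmatise = ["кнігамі NOUN Animacy=Inan|Case=Ins|Gender=Fem|Number=Plur"]
print(model.predict(to_lemmatise))  # expected: a list with the lemma, e.g. ["кніга"]
```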
## Training Details
### Training Data
[Belarusian-HSE](https://github.com/UniversalDependencies/UD_Belarusian-HSE/tree/master)
### Training Procedure
Virtual environment (it can be recreated with the command below):
- Python 3.10.12
- transformers==4.34.0
- sentence-splitter==1.4
- simpletransformers==0.64.3
- stanza==1.8.1
- torch==2.1.0
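A suggested one-liner to recreate the environment (assuming a Python 3.10 virtual environment is already active):
```bash
pip install transformers==4.34.0 sentence-splitter==1.4 simpletransformers==0.64.3 stanza==1.8.1 torch==2.1.0
```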
The script:
```python
import argparse
import random

import pandas as pd
import torch
from simpletransformers.seq2seq import Seq2SeqModel


def load_conllu_dataset(datafile):
    """Turn a CoNLL-U file into seq2seq pairs: "FORM UPOS FEATS" -> LEMMA."""
    arr = []
    with open(datafile, encoding='utf-8') as inp:
        strings = inp.readlines()
    for s in strings:
        # Skip comment lines and blank sentence separators.
        if s[0] != "#" and s.strip():
            split_string = s.split('\t')
            # CoNLL-U columns: 1 = FORM, 2 = LEMMA, 3 = UPOS, 5 = FEATS.
            arr.append([split_string[1] + " " + split_string[3] + " " + split_string[5],
                        split_string[2]])
    return pd.DataFrame(arr, columns=["input_text", "target_text"])


def count_matches(labels, preds):
    """Exact-match accuracy helper passed to train_model."""
    print(labels)
    print(preds)
    return sum([1 if label == pred else 0 for label, pred in zip(labels, preds)])


def main(args):
    train_df = load_conllu_dataset(args.train_data)
    args.fraction = float(args.fraction)
    print(f'Loading training dataset of {train_df.shape[0]} tokens')
    eval_df = load_conllu_dataset(args.dev_data)
    random.seed(int(args.seed))
    print(f'Setting seed to {args.seed}')
    # Optionally subsample the training data, e.g. for ablation runs.
    if 0.0 < args.fraction < 1.0:
        remainder = int(args.fraction * len(train_df))
        train_df = train_df.sample(remainder)
        print(f'Subsampling training dataset to {train_df.shape[0]} tokens')
    model_args = {
        "reprocess_input_data": True,
        "overwrite_output_dir": True,
        # Sequence lengths are bounded by the longest strings in the data,
        # since every example is a single token plus its morphological tags.
        "max_seq_length": max([len(token) for token in train_df["target_text"].tolist()]),
        "train_batch_size": int(args.batch),
        "num_train_epochs": int(args.epochs),
        "save_eval_checkpoints": False,
        "save_model_every_epoch": False,
        # "silent": True,
        "evaluate_generated_text": False,
        "evaluate_during_training": False,
        "evaluate_during_training_verbose": False,
        "use_multiprocessing": False,
        "use_multiprocessing_for_evaluation": False,
        "save_best_model": False,
        "max_length": max([len(token) for token in train_df["input_text"].tolist()]),
        "save_steps": -1,
    }
    model = Seq2SeqModel(
        encoder_decoder_type=args.model_type,
        encoder_decoder_name=args.model,
        args=model_args,
        use_cuda=torch.cuda.is_available(),
    )
    model.train_model(train_df, eval_data=eval_df, matches=count_matches)


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--train_data')
    parser.add_argument('--dev_data')
    parser.add_argument('--model_type', default="bart")
    parser.add_argument('--model', default="tiny-bart")
    parser.add_argument('--epochs', default="2")
    parser.add_argument('--batch', default="4")
    parser.add_argument('--fraction', help="Fraction of data", default=1.0)
    parser.add_argument('--seed', help="random seed", default=1590)
    args = parser.parse_args()
    main(args)
```
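An example invocation, matching the hyperparameters listed below (the script name `train_lemmatiser.py` is hypothetical; the data files are the standard splits from the UD_Belarusian-HSE repository):
```bash
python train_lemmatiser.py \
  --train_data be_hse-ud-train.conllu \
  --dev_data be_hse-ud-dev.conllu \
  --model_type bart \
  --model sshleifer/bart-tiny-random \
  --epochs 2 --batch 7 --seed 1590
```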
#### Training Hyperparameters
- **Training regime:** fp32
- **Epochs:** 2
- **Batch size:** 7
- **Seed:** 1590
#### Speeds, Sizes, Times
The training took around 2.5 hrs on a 4 GB GPU (NVIDIA GeForce RTX 3050).
## Evaluation
No evaluation procedures were run during training; a post-hoc evaluation sketch is shown below.
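A minimal post-hoc evaluation sketch (an illustration, not part of the original pipeline): it reuses `load_conllu_dataset` from the training script above and computes exact-match lemma accuracy on the test split.
```python
import torch
from simpletransformers.seq2seq import Seq2SeqModel

# "outputs" is the simpletransformers default output directory; adjust if
# the fine-tuned model was saved elsewhere.
model = Seq2SeqModel(
    encoder_decoder_type="bart",
    encoder_decoder_name="outputs",
    use_cuda=torch.cuda.is_available(),
)

test_df = load_conllu_dataset("be_hse-ud-test.conllu")
preds = model.predict(test_df["input_text"].tolist())
accuracy = sum(p == t for p, t in zip(preds, test_df["target_text"])) / len(test_df)
print(f"Exact-match lemma accuracy: {accuracy:.4f}")
```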
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
- **Hardware Type:** Personal laptop (Xiaomi Mi Notebook Pro X 15)
- **Hours used:** 4
- **Carbon emitted:** approx. 0.1 kg of CO₂eq
## Technical Specifications [optional]
### Model Architecture and Objective
- Architecture: BART
- Objective: sequence-to-sequence transformation
### Compute Infrastructure
Personal laptop
#### Hardware
- Xiaomi Mi Notebook Pro X 15
#### Software
- VS Code
## Citation
**BibTeX:**
TBP
**APA:**
TBP
## Model Card Authors [optional]
Ilia Afanasev
## Model Card Contact
ilia.afanasev.1997@gmail.com