---
language:
- tr
arXiv: 2403.01308
library_name: transformers
pipeline_tag: text2text-generation
license: cc-by-nc-sa-4.0
---
# VBART Model Card

## Model Description

VBART is the first sequence-to-sequence LLM pre-trained from scratch on large-scale Turkish corpora. It was pre-trained by VNGRS in February 2023.
When fine-tuned, the model is capable of conditional text generation tasks such as text summarization, paraphrasing, and title generation.
It outperforms its multilingual counterparts despite being much smaller than other implementations.

VBART-XLarge was created by adding extra Transformer layers between the layers of VBART-Large. This allowed it to transfer the learned weights of the smaller model while doubling its number of layers.
VBART-XLarge improves on the results of VBART-Large, albeit by small margins.

This repository contains fine-tuned TensorFlow and Safetensors weights of VBART for the text summarization task.

- **Developed by:** [VNGRS-AI](https://vngrs.com/ai/)
- **Model type:** Transformer encoder-decoder based on the mBART architecture
- **Language(s) (NLP):** Turkish
- **License:** CC BY-NC-SA 4.0
- **Fine-tuned from:** VBART-XLarge
- **Paper:** [arXiv](https://arxiv.org/abs/2403.01308)
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("vngrs-ai/VBART-XLarge-Summarization",
                                          model_input_names=['input_ids', 'attention_mask'])
# Uncomment the device_map kwarg and delete the closing bracket to use the model for inference on GPU
model = AutoModelForSeq2SeqLM.from_pretrained("vngrs-ai/VBART-XLarge-Summarization")#, device_map="auto")

input_text = "..."

token_input = tokenizer(input_text, return_tensors="pt")#.to('cuda')
outputs = model.generate(**token_input)
print(tokenizer.decode(outputs[0]))
```

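The snippet above uses default generation settings. For longer articles you may want to control generation explicitly; a minimal, illustrative example (the truncation and generation parameters below are assumed values for demonstration, not settings from the model card):

```python
# Hypothetical Turkish news text to summarize; replace with your own input.
input_text = "Türkiye'de teknoloji sektörü son yıllarda hızla büyüyor. ..."

token_input = tokenizer(input_text, return_tensors="pt", truncation=True)
outputs = model.generate(
    **token_input,
    max_new_tokens=128,  # cap the summary length (illustrative value)
    num_beams=4,         # beam search often improves summary quality (illustrative value)
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
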
## Training Details

### Training Data
The base model was pre-trained on [vngrs-web-corpus](https://huggingface.co/datasets/vngrs-ai/vngrs-web-corpus), which was curated by cleaning and filtering the Turkish portions of the [OSCAR-2201](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201) and [mC4](https://huggingface.co/datasets/mc4) datasets. These datasets consist of unstructured web-crawl documents; more information can be found on their respective pages. The data was filtered using a set of heuristics and rules, explained in the appendix of our [paper](https://arxiv.org/abs/2403.01308).

The fine-tuning dataset is the Turkish sections of the [MLSum](https://huggingface.co/datasets/mlsum), [TRNews](https://huggingface.co/datasets/batubayk/TR-News), [XLSum](https://huggingface.co/datasets/csebuetnlp/xlsum) and [Wikilingua](https://huggingface.co/datasets/wiki_lingua) datasets.

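For reference, the Turkish configurations of these corpora are available on the Hugging Face Hub; a minimal sketch of loading two of them with the `datasets` library (the config names are assumptions based on the Hub pages, and this is not the exact data-mixture script used for fine-tuning):

```python
from datasets import load_dataset

# Turkish portion of MLSum ("tu" config) and XLSum ("turkish" config);
# TRNews and Wikilingua can be loaded analogously from their Hub pages.
# Add trust_remote_code=True if your datasets version requires it for script-based datasets.
mlsum_tr = load_dataset("mlsum", "tu", split="train")
xlsum_tr = load_dataset("csebuetnlp/xlsum", "turkish", split="train")

# Each example pairs an article with its reference summary.
print(mlsum_tr[0]["text"][:200], "->", mlsum_tr[0]["summary"][:100])
```
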
### Limitations
This model is fine-tuned for the text summarization task. It is not intended to be used in any other case and cannot be fine-tuned for any other task while retaining the full performance of the base model. It is also not guaranteed that this model will work without the specified prompts.

### Training Procedure
The model was pre-trained for 30 days on a total of 708B tokens and fine-tuned for 20 epochs.
#### Hardware
- **GPUs**: 8 x Nvidia A100 (80 GB)
#### Software
- TensorFlow
#### Hyperparameters
##### Pretraining
- **Training regime:** fp16 mixed precision
- **Training objective**: Sentence permutation and span masking (mask lengths sampled from a Poisson distribution with λ = 3.5, masking 30% of tokens); see the sketch after this list
- **Optimizer**: Adam optimizer (β1 = 0.9, β2 = 0.98, ε = 1e-6)
- **Scheduler**: Custom scheduler from the original Transformer paper (20,000 warm-up steps)
- **Dropout**: 0.1 (dropped to 0.05 and then to 0 in the last 165k and 205k steps, respectively)
- **Initial learning rate**: 5e-6
- **Training tokens**: 708B

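A minimal, illustrative sketch of the span-masking (text-infilling) part of this objective, assuming BART-style infilling where each sampled span is collapsed into a single mask token; this is not the actual VBART pre-processing pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_spans(tokens, mask_token="<mask>", lam=3.5, mask_ratio=0.30):
    """Replace random spans (lengths ~ Poisson(lam)) with a single mask token
    until roughly `mask_ratio` of the original tokens are covered."""
    tokens = list(tokens)
    n_to_mask = int(len(tokens) * mask_ratio)
    masked = 0
    while masked < n_to_mask and len(tokens) > 1:
        span = max(1, int(rng.poisson(lam)))
        span = min(span, n_to_mask - masked, len(tokens) - 1)
        start = int(rng.integers(0, len(tokens) - span))
        tokens[start:start + span] = [mask_token]  # whole span collapses to one mask
        masked += span
    return tokens

# Sentence permutation (shuffling sentence order) is applied in addition to masking.
print(mask_spans("VBART Türkçe metinler üzerinde sıfırdan eğitilmiş bir modeldir".split()))
```
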
##### Fine-tuning
- **Training regime:** fp16 mixed precision
- **Optimizer**: Adam optimizer (β1 = 0.9, β2 = 0.98, ε = 1e-6)
- **Scheduler**: Linear decay scheduler
- **Dropout**: 0.1
- **Learning rate**: 1e-5
- **Fine-tune epochs**: 20

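For readers who want to approximate this fine-tuning setup with the Hugging Face `transformers` trainer, the hyperparameters above map roughly as follows (the original training was done in TensorFlow, so this is only an illustrative sketch, not the actual training script; the output path and batch size are placeholders):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="vbart-xlarge-summarization-ft",  # hypothetical output path
    num_train_epochs=20,
    learning_rate=1e-5,
    lr_scheduler_type="linear",      # linear decay scheduler
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-6,
    fp16=True,                       # fp16 mixed precision
    per_device_train_batch_size=8,   # placeholder, not reported in the card
)
# Dropout (0.1) is a model setting, configured on the model config rather than here.
```
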
#### Metrics

## Citation
```
@article{turker2024vbart,
  title={VBART: The Turkish LLM},
  author={Turker, Meliksah and Ari, Erdi and Han, Aydin},
  journal={arXiv preprint arXiv:2403.01308},
  year={2024}
}
```
|