---
language:
- tr
arXiv: 2403.01308
library_name: transformers
license: cc-by-nc-sa-4.0
datasets:
- vngrs-ai/vngrs-web-corpus
---

# VBART Model Card

## Model Description

VBART is the first sequence-to-sequence LLM pre-trained from scratch on large-scale Turkish corpora. It was pre-trained by VNGRS in February 2023. When fine-tuned, the model can perform conditional text generation tasks such as text summarization, paraphrasing, and title generation. It outperforms its multilingual counterparts despite being much smaller than other implementations.

It comes in two sizes:

- **VBART-Large**: 387M parameters
- **VBART-XLarge**: 740M parameters

VBART-XLarge was created by adding extra Transformer layers between the layers of VBART-Large. This allowed it to transfer the learned weights from the smaller model while doubling its number of layers. VBART-XLarge improves on the results of VBART-Large, albeit by small margins.

- **Developed by:** [VNGRS-AI](https://vngrs.com/ai/)
- **Model type:** Transformer encoder-decoder based on the mBART architecture
- **Language(s) (NLP):** Turkish
- **License:** CC BY-NC-SA 4.0
- **Paper:** [arXiv](https://arxiv.org/abs/2403.01308)

### Pre-training Data

The base model is pre-trained on [vngrs-web-corpus](https://huggingface.co/datasets/vngrs-ai/vngrs-web-corpus), which is curated by cleaning and filtering the Turkish parts of the [OSCAR-2201](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201) and [mC4](https://huggingface.co/datasets/mc4) datasets. These datasets consist of documents of unstructured web-crawl data. More information about the datasets can be found on their respective pages. Data is filtered using a set of heuristics and certain rules, explained in the appendix of our [paper](https://arxiv.org/abs/2403.01308).

#### Software

- TensorFlow

#### Pre-training Setting

- **Duration**: Pre-trained for 30 days
- **GPUs**: 8 x Nvidia A100-80 GB
- **Training tokens**: 708B
- **Context length**: 1024 for both encoder and decoder
- **Training regime**: fp16 mixed precision
- **Training objective**: Sentence permutation and span masking (mask lengths sampled from a Poisson distribution with λ = 3.5, masking 30% of tokens)
- **Optimizer**: Adam (β1 = 0.9, β2 = 0.98, ε = 1e-6)
- **Scheduler**: Custom scheduler from the original Transformer paper with 20,000 warm-up steps (see the illustrative sketch at the end of this card)
- **Dropout**: 0.1 (dropped to 0.05 and then to 0 in the last 165k and 205k steps, respectively)
- **Initial learning rate**: 5e-6

## Citation

```
@article{turker2024vbart,
  title={VBART: The Turkish LLM},
  author={Turker, Meliksah and Ari, Erdi and Han, Aydin},
  journal={arXiv preprint arXiv:2403.01308},
  year={2024}
}
```
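
## Usage

Since the card lists `transformers` as the library, a fine-tuned VBART checkpoint can be loaded with the standard seq2seq classes. The sketch below is illustrative only: the checkpoint identifier `vngrs-ai/VBART-Large-Paraphrasing` is a hypothetical placeholder (substitute the fine-tuned checkpoint you actually use; the base model needs fine-tuning before it can perform these tasks), and the generation parameters are arbitrary examples.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical checkpoint identifier; replace with the fine-tuned VBART
# checkpoint you intend to use.
model_id = "vngrs-ai/VBART-Large-Paraphrasing"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Turkish input sentence to paraphrase.
text = "VBART, Türkçe metinler üzerinde sıfırdan ön eğitilmiş bir modeldir."

# Encoder context length is 1024 tokens, per the pre-training setting above.
inputs = tokenizer(text, return_tensors="pt", max_length=1024, truncation=True)
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```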
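
The custom scheduler listed under Pre-training Setting follows the warm-up and inverse-square-root decay rule of the original Transformer paper (Vaswani et al., 2017). The sketch below is a minimal illustration of that rule: the 20,000 warm-up steps come from this card, while the model dimension used for scaling and how the quoted 5e-6 initial learning rate enters the schedule are assumptions; the exact implementation is described in the VBART paper.

```python
def transformer_lr(step: int, d_model: int = 1024, warmup_steps: int = 20_000) -> float:
    """Warm-up / inverse-square-root schedule from 'Attention Is All You Need':

        lr = d_model^-0.5 * min(step^-0.5, step * warmup_steps^-1.5)

    d_model = 1024 is an assumption for illustration; VBART's actual model
    dimension and any additional scaling are defined in the paper.
    """
    step = max(step, 1)  # avoid division by zero at step 0
    return (d_model ** -0.5) * min(step ** -0.5, step * warmup_steps ** -1.5)

# The rate rises linearly during warm-up, peaks at warmup_steps, then decays as step^-0.5.
for s in (1, 10_000, 20_000, 100_000):
    print(s, transformer_lr(s))
```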