
VBART Model Card

Model Description

VBART is the first sequence-to-sequence LLM pre-trained on Turkish corpora from scratch on a large scale. It was pre-trained by VNGRS in February 2023.
The model is capable of conditional text generation tasks such as text summarization, paraphrasing, and title generation when fine-tuned. It outperforms its multilingual counterparts despite being much smaller. It comes in two sizes:

  • VBART-Large: 387M parameters
  • VBART-XLarge: 740M parameters

VBART-XLarge is created by adding extra Transformer layers between the layers of VBART-Large. Hence it was able to transfer the learned weights from the smaller model while doubling its number of layers. VBART-XLarge improves on the results of VBART-Large, albeit by small margins.
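The depth-upscaling scheme above can be sketched in plain Python. This is only an illustration of interleaving pre-trained layers with freshly initialized ones; the helper name and the choice of what "new layer" means are assumptions, not the VBART codebase:

```python
def upscale_layers(large_layers):
    """Interleave each pre-trained layer with a newly added one,
    doubling the layer count. A sketch of the VBART-XLarge scheme;
    names and initialization are illustrative assumptions."""
    xlarge_layers = []
    for layer in large_layers:
        xlarge_layers.append(layer)           # weight transferred from VBART-Large
        xlarge_layers.append(f"new_{layer}")  # freshly initialized layer
    return xlarge_layers

# Three pre-trained encoder layers become six, alternating old and new
print(upscale_layers(["enc_0", "enc_1", "enc_2"]))
# → ['enc_0', 'new_enc_0', 'enc_1', 'new_enc_1', 'enc_2', 'new_enc_2']
```

Because every second layer starts from trained weights, the larger model does not have to be pre-trained from scratch.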

  • Developed by: VNGRS-AI
  • Model type: Transformer encoder-decoder based on mBART architecture
  • Language(s) (NLP): Turkish
  • License: CC BY-NC-SA 4.0
  • Paper: arXiv:2403.01308

Pre-training Data

The base model is pre-trained on vngrs-web-corpus, which was curated by cleaning and filtering the Turkish portions of the OSCAR-2201 and mC4 datasets. These datasets consist of documents of unstructured web-crawl data; more information can be found on their respective pages. The data was filtered using a set of heuristics and rules, explained in the appendix of our paper.

Software

  • TensorFlow

Pre-training Setting

  • Duration: Pre-trained for 30 days.
  • GPUs: 8 x Nvidia A100-80 GB
  • Training tokens: 708B
  • Context Length: 1024 for both encoder and decoder
  • Training regime: fp16 mixed precision
  • Training objective: Sentence permutation and span masking (using mask lengths sampled from Poisson distribution λ=3.5, masking 30% of tokens)
  • Optimizer: Adam (β1 = 0.9, β2 = 0.98, ε = 1e-6)
  • Scheduler: Custom scheduler from the original Transformer paper (20,000 warm-up steps)
  • Dropout: 0.1 (dropped to 0.05 and then to 0 in the last 165K and 205K steps, respectively)
  • Initial learning rate: 5e-6
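The span-masking objective can be sketched in plain Python. This is a simplified illustration of BART-style text infilling (span lengths sampled from Poisson(λ=3.5), each masked span collapsed into a single mask token, ~30% of tokens covered), not the actual VBART training code:

```python
import math
import random

def span_mask(tokens, mask_ratio=0.30, lam=3.5, mask_token="<mask>", seed=0):
    """BART-style text-infilling sketch: mask spans with Poisson(lam)
    lengths until ~mask_ratio of the tokens are covered, then collapse
    each contiguous masked span into a single mask token."""
    rng = random.Random(seed)
    n = len(tokens)
    budget = int(n * mask_ratio)  # e.g. 30 masked tokens for n = 100
    masked = set()
    while len(masked) < budget:
        # Poisson(lam) sample via Knuth's multiplication method
        length, p, threshold = 0, rng.random(), math.exp(-lam)
        while p > threshold:
            length += 1
            p *= rng.random()
        length = max(1, min(length, budget - len(masked)))
        start = rng.randrange(0, n - length + 1)
        masked.update(range(start, start + length))
    out = []
    for i, tok in enumerate(tokens):
        if i not in masked:
            out.append(tok)
        elif i - 1 not in masked:  # first token of a masked span
            out.append(mask_token)
    return out
```

Collapsing each span into one mask token forces the decoder to also predict how many tokens are missing, which is what distinguishes text infilling from plain token masking.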
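For reference, the schedule from the original Transformer paper ("Attention Is All You Need") warms up linearly and then decays with the inverse square root of the step. Since the card describes VBART's scheduler as custom, the exact variant may differ, and the `d_model` value below is only a placeholder:

```python
def transformer_lr(step, d_model=1024, warmup=20_000):
    """Inverse-square-root schedule from the original Transformer paper:
    learning rate rises linearly for `warmup` steps, peaks, then decays
    as step**-0.5. A reference formula, not the exact VBART scheduler."""
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)

# The rate peaks at the end of the 20,000-step warm-up, then decays
peak = transformer_lr(20_000)
assert transformer_lr(10_000) < peak and transformer_lr(40_000) < peak
```

With 20,000 warm-up steps, the peak rate is reached exactly at step 20,000 and halves roughly every fourfold increase in step count afterwards.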

Citation

@article{turker2024vbart,
  title={VBART: The Turkish LLM},
  author={Turker, Meliksah and Ari, Erdi and Han, Aydin},
  journal={arXiv preprint arXiv:2403.01308},
  year={2024}
}