# <a name="introduction"></a> BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese

The pre-trained model `vinai/bartpho-syllable-base` is the "base" variant of `BARTpho-syllable`, which uses the "base" architecture and pre-training scheme of the sequence-to-sequence denoising model [BART](https://github.com/pytorch/fairseq/tree/main/examples/bart). The general architecture and experimental results of BARTpho can be found in our [paper](https://arxiv.org/abs/2109.09701):
```
@article{bartpho,
    title   = {{BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese}},
    author  = {Nguyen Luong Tran and Duong Minh Le and Dat Quoc Nguyen},
    journal = {arXiv preprint},
    volume  = {arXiv:2109.09701},
    year    = {2021}
}
```
**Please CITE** our paper when BARTpho is used to help produce published results or is incorporated into other software.

For further information or requests, please go to [BARTpho's homepage](https://github.com/VinAIResearch/BARTpho)!
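As a minimal sketch, the checkpoint named above can be loaded with the Hugging Face `transformers` library to extract contextual features; the Vietnamese example sentence is purely illustrative, and `transformers` plus `torch` are assumed to be installed:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load the "base" syllable-level BARTpho variant from the Hugging Face Hub.
bartpho = AutoModel.from_pretrained("vinai/bartpho-syllable-base")
tokenizer = AutoTokenizer.from_pretrained("vinai/bartpho-syllable-base")

# Illustrative input sentence ("We are research scientists.").
line = "Chúng tôi là những nghiên cứu viên."
input_ids = tokenizer(line, return_tensors="pt")

with torch.no_grad():
    features = bartpho(**input_ids)

# features.last_hidden_state has shape (batch, sequence_length, hidden_size).
```

The same pattern applies to the other BARTpho checkpoints by swapping in the corresponding model identifier.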