Introduction

This is a fine-tuned language model from our paper below; the related GitHub repository is linked here.

Human-in-the-Loop Generation of Adversarial Texts: A Case Study on Tibetan Script (Cao et al., IJCNLP-AACL 2025 Demo)
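The `_tsheg` suffix in the model name suggests the model works on tsheg-level segmentation (an assumption; consult the paper and repository to confirm). As background, a minimal sketch of splitting Tibetan text into syllables at the tsheg mark (U+0F0B) might look like:

```python
# Minimal sketch: split Tibetan text at the tsheg mark (U+0F0B).
# Assumption: the "_tsheg" suffix refers to tsheg-delimited syllable
# segmentation; this helper is illustrative, not the authors' code.
TSHEG = "\u0f0b"

def tsheg_segment(text: str) -> list[str]:
    """Split a Tibetan string into syllables at each tsheg, dropping empties."""
    return [s for s in text.split(TSHEG) if s]

print(tsheg_segment("བོད་ཡིག"))  # → ['བོད', 'ཡིག']
```

The actual tokenization used in the paper may differ; this only illustrates the tsheg-delimited view of Tibetan text.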

Citation

If you find our work useful, please cite our paper.

@inproceedings{cao-etal-2025-human,
    title = "Human-in-the-Loop Generation of Adversarial Texts: A Case Study on {T}ibetan Script",
    author = "Cao, Xi  and
      Sun, Yuan  and
      Li, Jiajun  and
      Gesang, Quzong  and
      Qun, Nuo  and
      Tashi, Nyima",
    editor = "Liu, Xuebo  and
      Purwarianti, Ayu",
    booktitle = "Proceedings of The 14th International Joint Conference on Natural Language Processing and The 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics: System Demonstrations",
    month = dec,
    year = "2025",
    address = "Mumbai, India",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.ijcnlp-demo.2/",
    pages = "9--16",
    ISBN = "979-8-89176-301-2"
}