jknafou committed
Commit 3a6da9d · verified · 1 parent: 794acd9

Update README.md

Files changed (1):
  README.md +21 -7
README.md CHANGED
@@ -56,13 +56,27 @@ print(dataset[0])
 # Citation
 If you use this corpus, please cite:
 ```text
-@inproceedings{
-knafou2025transbert,
-title={Trans{BERT}: A Framework for Synthetic Translation in Domain-Specific Language Modeling},
-author={Julien Knafou and Luc Mottin and Ana{\"\i}s Mottaz and Alexandre Flament and Patrick Ruch},
-booktitle={The 2025 Conference on Empirical Methods in Natural Language Processing},
-year={2025},
-url={https://transbert.s3.text-analytics.ch/TransBERT.pdf}
+@inproceedings{knafou-etal-2025-transbert,
+    title = "{T}rans{BERT}: A Framework for Synthetic Translation in Domain-Specific Language Modeling",
+    author = {Knafou, Julien and
+      Mottin, Luc and
+      Mottaz, Ana{\"i}s and
+      Flament, Alexandre and
+      Ruch, Patrick},
+    editor = "Christodoulopoulos, Christos and
+      Chakraborty, Tanmoy and
+      Rose, Carolyn and
+      Peng, Violet",
+    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2025",
+    month = nov,
+    year = "2025",
+    address = "Suzhou, China",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2025.findings-emnlp.1053/",
+    doi = "10.18653/v1/2025.findings-emnlp.1053",
+    pages = "19338--19354",
+    ISBN = "979-8-89176-335-7",
+    abstract = "The scarcity of non-English language data in specialized domains significantly limits the development of effective Natural Language Processing (NLP) tools. We present TransBERT, a novel framework for pre-training language models using exclusively synthetically translated text, and introduce TransCorpus, a scalable translation toolkit. Focusing on the life sciences domain in French, our approach demonstrates that state-of-the-art performance on various downstream tasks can be achieved solely by leveraging synthetically translated data. We release the TransCorpus toolkit, the TransCorpus-bio-fr corpus (36.4GB of French life sciences text), TransBERT-bio-fr, its associated pre-trained language model and reproducible code for both pre-training and fine-tuning. Our results highlight the viability of synthetic translation in a high-resource translation direction for building high-quality NLP resources in low-resource language/domain pairs."
 }
 ```