---
license: mit
language:
  - en
tags:
  - life-sciences
  - clinical
  - biomedical
  - bio
  - medical
  - biology
  - synthetic
pretty_name: TransCorpus-bio
size_categories:
  - 10M<n<100M
---

# TransCorpus-bio

TransCorpus-bio is a large-scale, parallel biomedical corpus consisting of PubMed abstracts. This dataset is used in the TransCorpus Toolkit and is designed to enable high-quality multi-lingual biomedical language modeling and downstream NLP research.

Translations of this corpus are currently produced with the TransCorpus Toolkit (see, e.g., TransCorpus-bio-fr).

## Dataset Details

- **Source:** PubMed abstracts (English)
- **Size:** 22 million abstracts, 30.2 GB of text
- **Domain:** Biomedical, clinical, life sciences
- **Format:** One abstract per line

## Motivation

Most non-English languages are low-resource for biomedical NLP, with limited availability of large, high-quality corpora. TransCorpus-bio bridges this gap by leveraging state-of-the-art neural machine translation to generate a massive, high-quality synthetic corpus, enabling robust pretraining and evaluation of biomedical language models in target languages such as French.

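The corpus can be loaded directly with the 🤗 Datasets library: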
```python
from datasets import load_dataset

dataset = load_dataset("jknafou/TransCorpus-bio", split="train")

print(dataset)
# Output:
# Dataset({
#    features: ['text'],
#    num_rows: 21567136
# })

print(dataset[0])
```
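Because the full corpus is roughly 30 GB of text, it can also be convenient to stream it rather than download and materialize everything up front. The following is a minimal sketch using the Datasets streaming mode; it only assumes the single `text` column shown above.

```python
from datasets import load_dataset

# Stream the corpus lazily instead of downloading all ~30 GB at once.
streamed = load_dataset("jknafou/TransCorpus-bio", split="train", streaming=True)

# Peek at the first three abstracts without materializing the dataset.
for i, example in enumerate(streamed):
    print(example["text"][:200])
    if i == 2:
        break
```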

## Benchmark Results in our French Experiment

TransBERT-bio-fr, pretrained on TransCorpus-bio-fr, achieves state-of-the-art results on the French biomedical benchmark DrBenchmark, outperforming both general-domain and previous domain-specific models on classification, NER, POS, and STS tasks. See TransBERT-bio-fr for details.
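As a quick way to try the companion model, the sketch below loads it for masked-token prediction with 🤗 Transformers. The repository id `jknafou/TransBERT-bio-fr` is an assumption based on the naming above; check the model card for the authoritative id and usage.

```python
from transformers import pipeline

# Assumed model id, following the naming used in this card; verify on the model card.
model_id = "jknafou/TransBERT-bio-fr"

fill_mask = pipeline("fill-mask", model=model_id)

# Use the model's own mask token so this works regardless of the tokenizer family.
mask = fill_mask.tokenizer.mask_token
print(fill_mask(f"Le patient présente une {mask} pulmonaire."))
```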

## Why Synthetic Translation?

- **Scalable:** Enables creation of large-scale corpora for any language with a strong MT system.
- **Effective:** Supports state-of-the-art performance on downstream tasks.
- **Accessible:** Makes domain-specific NLP feasible for any language.

## Citation

If you use this corpus, please cite:

```bibtex
@inproceedings{knafou-etal-2025-transbert,
    title = "{T}rans{BERT}: A Framework for Synthetic Translation in Domain-Specific Language Modeling",
    author = {Knafou, Julien  and
      Mottin, Luc  and
      Mottaz, Ana{\"i}s  and
      Flament, Alexandre  and
      Ruch, Patrick},
    editor = "Christodoulopoulos, Christos  and
      Chakraborty, Tanmoy  and
      Rose, Carolyn  and
      Peng, Violet",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2025",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.findings-emnlp.1053/",
    doi = "10.18653/v1/2025.findings-emnlp.1053",
    pages = "19338--19354",
    ISBN = "979-8-89176-335-7",
    abstract = "The scarcity of non-English language data in specialized domains significantly limits the development of effective Natural Language Processing (NLP) tools. We present TransBERT, a novel framework for pre-training language models using exclusively synthetically translated text, and introduce TransCorpus, a scalable translation toolkit. Focusing on the life sciences domain in French, our approach demonstrates that state-of-the-art performance on various downstream tasks can be achieved solely by leveraging synthetically translated data. We release the TransCorpus toolkit, the TransCorpus-bio-fr corpus (36.4GB of French life sciences text), TransBERT-bio-fr, its associated pre-trained language model and reproducible code for both pre-training and fine-tuning. Our results highlight the viability of synthetic translation in a high-resource translation direction for building high-quality NLP resources in low-resource language/domain pairs."
}
```