---
license: cc-by-4.0
task_categories:
  - translation
language:
  - es
  - en
  - ca
  - pt
  - fr
  - eu
  - gl
  - de
  - nl
  - el
  - it
size_categories:
  - 1M<n<10M
configs:
  - config_name: default
    data_files:
      - split: train
        path: acadtrain.parquet
      - split: test
        path: acadbench/acadbench.parquet
---

Dataset Card for ACAData

Dataset Description

Dataset Summary

ACAData is a multilingual instruction-tuning dataset containing parallel text paragraphs from the academic domain.

Supported Tasks and Leaderboards

The dataset is meant for fine-tuning and benchmarking general-purpose LLMs on machine translation tasks.

Languages

The dataset contains (mainly long) paragraphs of scientific texts from the academic domain in many European language pairs. The language coverage and distribution of the dataset are represented in the tables below. For further details, we refer to the paper ACADATA: Parallel Dataset of Academic Data for Machine Translation.

Dataset Structure

ACAData is composed of two subsets: ACAD-Train and ACAD-Bench. The first is intended for training, while the second serves as the benchmark split.

IMPORTANT:

ACAD-Train is released in raw format as a Parquet file where each row contains a paragraph aligned across multiple languages, one language per column, for a total of 739,211 raw instances. This corresponds to the dataset before conversion into the instruction format described in ACADATA: Parallel Dataset of Academic Data for Machine Translation. During conversion, each parallel pair generates two instruction instances (one per translation direction), resulting in 1,478,422 training instances.
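This two-direction expansion can be sketched as follows (a minimal illustration; the function and output field names are ours, not part of the release):

```python
def expand_directions(row):
    """Turn one raw parallel row into two instruction instances,
    one per translation direction (lang1 -> lang2 and lang2 -> lang1)."""
    forward = {"src_lang": row["lang1_code"], "tgt_lang": row["lang2_code"],
               "src_text": row["lang1"], "tgt_text": row["lang2"]}
    backward = {"src_lang": row["lang2_code"], "tgt_lang": row["lang1_code"],
                "src_text": row["lang2"], "tgt_text": row["lang1"]}
    return [forward, backward]

# Each raw pair yields two instances, which is how
# 739,211 raw rows become 1,478,422 training instances.
row = {"lang1_code": "es", "lang2_code": "en",
       "lang1": "Introducción al análisis forense...",
       "lang2": "Introduction to forensic analysis..."}
instances = expand_directions(row)
```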

ACAD-Bench is also released in raw format as a Parquet file, but in this case each pair has already been duplicated and swapped to cover both translation directions, for a total of 5,944 instances. ACAD-Bench is therefore ready to be used for model evaluation.

Data Instances

The key characteristics of ACAD-Train are the following: [figure: ACAD-Train statistics]

The key characteristics of ACAD-Bench are the following: [figure: ACAD-Bench statistics]

Both splits have the following structure:

  lang1_code  lang2_code  lang1                                               lang2
0        ast         ca  Introducción al analisis forense con distribuc...  Introducció a l'anàlisi forense...
1        ast         ca  Creación de un almacén de datos...                 Creació d'un magatzem de dades ...
2        ast         ca  Monografía ilustrada sobre la i...                 Monografia il·lustrada sobre la...
3        ast         ca  Entrevista con el escritor alba...                 Entrevista amb l'escriptor alba...
4        ast         en  Afondamos nesti trabayu con abo...                 Following the short essay Topon...

Although ACAD-Bench is provided in raw format, the evaluations reported in the Evaluation section of ACADATA: Parallel Dataset of Academic Data for Machine Translation were carried out using the following structure (Catalan → English example):

{
    "id": "test_ca-en_abstract_dataset_{idx}",
    "task": "abstract_dataset",
    "lang": "ca-en",
    "conversations": [
      {
        "from": "human",
        "value": "Translate the following text from Catalan to English.\nCatalan: {lang1}"
      },
      {
        "from": "gpt",
        "value": "{lang2}"
      }
    ]
}
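Such an instance can be assembled from a raw pair along these lines (a sketch; the language-name table and helper function are our own assumptions, not the released conversion script):

```python
# Partial code-to-name mapping, for illustration only.
LANG_NAMES = {"ca": "Catalan", "en": "English", "es": "Spanish"}

def make_instance(idx, task, src_code, tgt_code, src_text, tgt_text):
    """Build one evaluation instance in the conversation format above."""
    src, tgt = LANG_NAMES[src_code], LANG_NAMES[tgt_code]
    return {
        "id": f"test_{src_code}-{tgt_code}_{task}_{idx}",
        "task": task,
        "lang": f"{src_code}-{tgt_code}",
        "conversations": [
            {"from": "human",
             "value": f"Translate the following text from {src} to {tgt}.\n{src}: {src_text}"},
            {"from": "gpt", "value": tgt_text},
        ],
    }

inst = make_instance(0, "abstract_dataset", "ca", "en",
                     "Resum de l'article...", "Abstract of the paper...")
```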

Data Fields

  • lang1_code: ISO language code of the text in lang1 (the first text in the pair).
  • lang2_code: ISO language code of the text in lang2 (the second text in the pair).
  • lang1: The first text in the bilingual instance.
  • lang2: The second text in the bilingual instance.

Data Splits

The dataset contains two splits: train (ACAD-Train) and test (ACAD-Bench).

Dataset Creation

Curation Rationale

This dataset is aimed at improving the Machine Translation performance of LLMs in the academic domain.

Source Data

Translation pairs were harvested from the metadata of multiple European academic repositories using the OAI-PMH protocol. For each harvested metadata record, we extracted the textual content from the record's "description" field and used those texts as the source for candidate segments.

Initial Data Collection and Normalization

Using OAI-PMH, we inspected each record’s description field to detect multiple entries. When multiple entries were present, we extracted embeddings for each entry with LaBSE, computed pairwise cosine similarities, and selected translation pairs with similarity ≥ 0.80. Language identification was then performed using GlotLID.
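The similarity-based pairing can be sketched in pure Python; in practice the embeddings would come from LaBSE, and the toy vectors below merely stand in for real sentence embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def select_pairs(embeddings, threshold=0.80):
    """Return (i, j, similarity) for all entry pairs whose cosine
    similarity meets the threshold -- the candidate translation pairs."""
    pairs = []
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            s = cosine(embeddings[i], embeddings[j])
            if s >= threshold:
                pairs.append((i, j, s))
    return pairs

# Toy stand-ins for embeddings of three description entries:
# entries 0 and 1 are near-parallel, entry 2 is unrelated.
emb = [[1.0, 0.0, 0.1], [0.9, 0.0, 0.2], [0.0, 1.0, 0.0]]
pairs = select_pairs(emb, threshold=0.80)
```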

For normalization, we applied the following preprocessing before embedding and language identification:

  • stripped leading language markers (e.g., “(Spanish)”, “(eng)”);
  • normalized punctuation and typography: converted all quotation marks and apostrophes to ASCII equivalents, replaced masculine ordinals “º” with degree symbols “°”, and converted superscript/subscript digits to regular digits;
  • removed common inline markers (short bracketed/parenthesized codes, leading // or :);
  • collapsed simple HTML tags;
  • collapsed repeated whitespace into single spaces.

We also applied Unicode NFKC normalization and, where appropriate, lowercasing to ensure consistent tokenization and more stable embeddings.
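A rough reconstruction of this preprocessing (our approximation of the described steps, not the released pipeline):

```python
import re
import unicodedata

def normalize(text):
    """Approximate the normalization steps described above."""
    # Strip a leading parenthesized language marker such as "(Spanish)" or "(eng)".
    text = re.sub(r"^\s*\([A-Za-z]{2,10}\)\s*", "", text)
    # ASCII quotes/apostrophes; masculine ordinal to degree symbol.
    text = text.translate(str.maketrans({"“": '"', "”": '"', "‘": "'", "’": "'", "º": "°"}))
    # Remove a leading "//" or ":" marker.
    text = re.sub(r"^(?://|:)\s*", "", text)
    # Collapse simple HTML tags.
    text = re.sub(r"<[^>]+>", " ", text)
    # NFKC also maps superscript/subscript digits to regular digits.
    text = unicodedata.normalize("NFKC", text)
    # Collapse repeated whitespace.
    return re.sub(r"\s+", " ", text).strip()

print(normalize("(eng) <i>“Superscript x²”</i>"))
```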

Who are the source language producers?

The following table provides a complete list of the source repositories from which the data were extracted (instance counts are shown before deduplication).

[table: source repositories and raw instance counts]

Annotations

Annotation process

The dataset does not contain any annotations.

Who are the annotators?

[N/A]

Personal and Sensitive Information

No specific anonymization process has been applied. Personal and sensitive information might be present in the data. This needs to be considered when using the data for fine-tuning models.

Evaluation

Aggregated results for the XX ↔ EN and XX ↔ ES translation directions on the ACAD-Bench dataset. Baselines are grouped into large-scale proprietary general models, medium- to small-sized open-weights models, and dedicated MMNMT models. For a more detailed evaluation analysis, please refer to the paper.

XX → EN
Model            d-BLEU  BP    BlonDe  COMET  COMET-Kiwi
GPT-mini         46.03   1.00  0.60    0.84   0.77
GPT-nano         41.30   0.97  0.55    0.84   0.78
Gemini-2         48.65   1.00  0.61    0.84   0.77
Gemini-2.5       45.10   0.98  0.58    0.84   0.77
Llama-3-8B       43.12   0.99  0.56    0.83   0.76
Gemma-3-27B      46.37   0.98  0.59    0.84   0.77
MADLAD-7B        38.69   0.86  0.51    0.81   0.77
Salamandra-2B    37.09   0.92  0.52    0.82   0.75
  + ACADTRAIN    48.45   1.00  0.61    0.83   0.76
Salamandra-7B    45.87   0.99  0.59    0.83   0.76
  + ACADTRAIN    50.07   1.00  0.62    0.84   0.76

EN → XX
Model            d-BLEU  BP    BlonDe  COMET  COMET-Kiwi
GPT-mini         45.01   0.99  -       0.86   0.82
GPT-nano         43.78   1.00  -       0.86   0.82
Gemini-2         48.00   0.99  -       0.87   0.82
Gemini-2.5       47.75   0.99  -       0.87   0.82
Llama-3-8B       39.87   0.99  -       0.85   0.81
Gemma-3-27B      46.29   0.99  -       0.86   0.82
MADLAD-7B        36.08   0.82  -       0.83   0.80
Salamandra-2B    32.91   0.90  -       0.83   0.78
  + ACADTRAIN    46.86   0.98  -       0.86   0.81
Salamandra-7B    42.55   0.98  -       0.86   0.81
  + ACADTRAIN    49.20   0.98  -       0.86   0.81

XX → ES
Model            d-BLEU  BP    BlonDe  COMET  COMET-Kiwi
GPT-mini         60.60   0.98  -       0.86   0.82
GPT-nano         57.88   0.99  -       0.86   0.82
Gemini-2         62.02   0.99  -       0.86   0.82
Gemini-2.5       61.43   0.98  -       0.87   0.82
Llama-3-8B       55.4    0.98  -       0.86   0.81
Gemma-3-27B      60.71   0.98  -       0.86   0.82
MADLAD-7B        43.44   0.76  -       0.83   0.81
Salamandra-2B    50.09   0.92  -       0.85   0.80
  + ACADTRAIN    61.97   0.98  -       0.86   0.82
Salamandra-7B    57.55   0.98  -       0.86   0.82
  + ACADTRAIN    63.60   0.98  -       0.86   0.82

ES → XX
Model            d-BLEU  BP    BlonDe  COMET  COMET-Kiwi
GPT-mini         54.19   0.99  -       0.86   0.81
GPT-nano         51.95   0.99  -       0.86   0.81
Gemini-2         60.28   0.99  -       0.86   0.81
Gemini-2.5       57.61   0.99  -       0.86   0.81
Llama-3-8B       52.12   0.99  -       0.85   0.80
Gemma-3-27B      57.31   0.99  -       0.86   0.81
MADLAD-7B        40.13   0.79  -       0.83   0.81
Salamandra-2B    47.84   0.94  -       0.84   0.80
  + ACADTRAIN    60.09   0.99  -       0.86   0.81
Salamandra-7B    55.65   0.98  -       0.86   0.80
  + ACADTRAIN    61.61   0.99  -       0.86   0.81

Considerations for Using the Data

Discussion of Biases

No specific bias mitigation strategies were applied to this dataset. Inherent biases may exist within the data.

Other Known Limitations

The dataset contains data from the academic domain only. Applying this dataset to domains or languages not included in the training set is likely to be of limited use.

Additional Information

Dataset Curators

Language Technologies Unit at the Barcelona Supercomputing Center (langtech@bsc.es).

Funding

This work is funded by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the project Modelos del Lenguaje.

This work has been promoted and financed by the Government of Catalonia through the Aina project.

This work is funded by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the project ILENIA with reference 2022/TL22/00215337.

Licensing Information

This work is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) license.

Citation Information

@misc{lacunza2025acadataparalleldatasetacademic,
      title={ACADATA: Parallel Dataset of Academic Data for Machine Translation}, 
      author={Iñaki Lacunza and Javier Garcia Gilabert and Francesca De Luca Fornaciari and Javier Aula-Blasco and Aitor Gonzalez-Agirre and Maite Melero and Marta Villegas},
      year={2025},
      eprint={2510.12621},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2510.12621}, 
}

Contributions

By releasing ACAD-Train, ACAD-Bench, and the fine-tuned models under permissive licenses, we offer the community a robust foundation training dataset and evaluation benchmark for advancing machine translation in the academic domain. Ultimately, with this work we aim to help bridge communication across the global scientific community and make research more discoverable and accessible regardless of the language in which it was originally published.