---
datasets:
  - PlanTL-GOB-ES/SQAC
  - rajpurkar/squad_v2
language:
  - es
metrics:
  - bleu
  - meteor
  - rouge
  - sari
  - google_bleu
  - wer
base_model:
  - google/mt5-base
pipeline_tag: text2text-generation
library_name: transformers
tags:
  - AnswerExtraction
license: gpl-3.0
---

# mT5-base_AE_SS

## Modifications and Derivative Work Notice

This model is based on google/mt5-base, licensed under the Apache License 2.0.

This repository contains a modified and fine-tuned version of the original model.

Modifications include:

- Additional training on the SQuAD v2 and SQAC datasets to fine-tune the model for the answer extraction task
- Hyperparameter adjustments

A detailed description of the modifications, training procedure, and experimental setup can be found in the associated paper: *Evaluating the performance of multilingual models in answer extraction and question generation*.

All modifications were made by INTELIA.
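As a starting point, the fine-tuned checkpoint can be loaded with the standard `transformers` seq2seq classes. This is a minimal sketch: the repository id `intelia-lab/mT5-base_AE_SS` is inferred from this page, and the exact input format the model expects for answer extraction is not documented here, so the plain-context input below is an assumption; consult the associated paper for the prompt layout actually used in training.

```python
# Minimal usage sketch with Hugging Face transformers.
# ASSUMPTIONS: the repo id below is inferred from this model card, and
# feeding the raw Spanish context as input is a guess at the expected
# prompt format for the answer extraction task.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "intelia-lab/mT5-base_AE_SS"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# A Spanish context passage; the model should extract a candidate answer.
context = "El Quijote fue escrito por Miguel de Cervantes en 1605."
inputs = tokenizer(context, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
answer = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(answer)
```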

## Citation

If you use this model in your research, please cite the following paper:

```bibtex
@article{moreno-cediel_evaluating_2024,
  title = {Evaluating the performance of multilingual models in answer extraction and question generation},
  volume = {14},
  copyright = {2024 The Author(s)},
  issn = {2045-2322},
  url = {https://www.nature.com/articles/s41598-024-66472-5},
  doi = {10.1038/s41598-024-66472-5},
  language = {en},
  number = {1},
  urldate = {2025-01-10},
  journal = {Scientific Reports},
  author = {Moreno-Cediel, Antonio and del-Hoyo-Gabaldon, Jesus-Angel and Garcia-Lopez, Eva and Garcia-Cabot, Antonio and de-Fitero-Dominguez, David},
  month = jul,
  year = {2024},
  note = {Publisher: Nature Publishing Group},
  keywords = {Computer science, Software},
  pages = {15477},
}
```