---
dataset_info:
  features:
    - name: data
      dtype: string
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 8160924
      num_examples: 33525
  download_size: 2382711
  dataset_size: 8160924
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
language:
  - fra
license: cc-by-sa-4.0
task_categories:
  - table-to-text
---

## Description

French translation of the E2E NLG dataset.
According to the dataset card of the English version, the E2E NLG dataset is a benchmark for data-to-text models that verbalize a set of 2-9 key-value attribute pairs in the restaurant domain. The version used for GEM is the cleaned E2E NLG dataset, which filters out examples with hallucinations and outputs that do not fully cover all input attributes.

You can find the main data card on the GEM website, and we invite you to consult the paper cited below.
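
The dataset can be loaded with the Hugging Face `datasets` library. Below is a minimal sketch; the repository ID used here is a placeholder and should be replaced with this dataset's actual identifier on the Hub.

```python
from datasets import load_dataset

# Placeholder repository ID: replace with this dataset's actual "namespace/name" on the Hugging Face Hub.
dataset = load_dataset("username/e2e_nlg_french", split="train")

# Each example contains two string fields:
#   "data": the input, a set of key-value attribute pairs describing a restaurant
#   "text": the corresponding French verbalization
example = dataset[0]
print(example["data"])
print(example["text"])
```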

## Citation

@inproceedings{dusek-etal-2019-semantic,
    title = "Semantic Noise Matters for Neural Natural Language Generation",
    author = "Du{\v{s}}ek, Ond{\v{r}}ej  and
      Howcroft, David M.  and
      Rieser, Verena",
    editor = "van Deemter, Kees  and
      Lin, Chenghua  and
      Takamura, Hiroya",
    booktitle = "Proceedings of the 12th International Conference on Natural Language Generation",
    month = oct # "–" # nov,
    year = "2019",
    address = "Tokyo, Japan",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/W19-8652/",
    doi = "10.18653/v1/W19-8652",
    pages = "421--426",
    abstract = "Neural natural language generation (NNLG) systems are known for their pathological outputs, i.e. generating text which is unrelated to the input specification. In this paper, we show the impact of semantic noise on state-of-the-art NNLG models which implement different semantic control mechanisms. We find that cleaned data can improve semantic correctness by up to 97{\%}, while maintaining fluency. We also find that the most common error is omitting information, rather than hallucination."
}