---
license: cc-by-sa-4.0
task_categories:
  - text-classification
task_ids:
  - natural-language-inference
  - multi-input-text-classification
language:
  - fr
  - en
size_categories:
  - 1K<n<10K
---

# Dataset Card for Breaking_NLI-FR

## Dataset Details

### Dataset Description

This repository contains a machine-translated, manually verified French version of the Breaking_NLI dataset, originally written in English. Breaking_NLI is a test set for models trained on the natural language inference (NLI) task; its examples require lexical and world knowledge, with each hypothesis differing from its premise by a single replaced word or phrase.

  - **Curated by:** [More Information Needed]
  - **Funded by [optional]:** Defence Innovation Agency (AID) of the Directorate General of Armament (DGA) of the French Ministry of Armed Forces; ICO, Institut Cybersécurité Occitanie, funded by Région Occitanie, France
  - **Shared by [optional]:** [More Information Needed]
  - **Language(s) (NLP):** French (original English sentences are also included)
  - **License:** cc-by-sa-4.0

### Dataset Sources [optional]

  - **Repository:** [More Information Needed]
  - **Paper [optional]:** https://aclanthology.org/2024.lrec-main.1065 (French translation); https://aclanthology.org/P18-2103/ (original English dataset)
  - **Demo [optional]:** [More Information Needed]

## Uses

### Direct Use

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## Dataset Structure

### Data Fields

  - **pair_ID:** A unique identifier for each sentence1–sentence2 pair.
  - **sentence1_fr:** Sentence 1 in French, also known as the premise in other NLI datasets.
  - **sentence2_fr:** Sentence 2 in French, also known as the hypothesis in other NLI datasets; here it is identical to the premise except for one replaced word or phrase.
  - **gold_label:** The label chosen by the majority of annotators (entailment, neutral, or contradiction).
  - **category:** The category that semantically groups the replaced words.
  - **sentence1_en_orig:** The original sentence 1 (premise) from the English source dataset.
  - **sentence2_en_orig:** The original sentence 2 (hypothesis) from the English source dataset.
  - **annotator_labels:** All of the individual labels from the three annotators.
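
The field list above can be sketched as a Python record. The French and English sentence values below are invented for illustration (they are not taken from the dataset), and the majority-vote helper simply mirrors the stated relationship between `annotator_labels` and `gold_label`:

```python
from collections import Counter

# Illustrative record following the schema above; text values are invented.
record = {
    "pair_ID": "1",
    "sentence1_fr": "Un homme joue de la guitare.",
    "sentence2_fr": "Un homme joue du violon.",
    "gold_label": "contradiction",
    "category": "instruments",
    "sentence1_en_orig": "A man is playing a guitar.",
    "sentence2_en_orig": "A man is playing a violin.",
    "annotator_labels": ["contradiction", "contradiction", "neutral"],
}

def majority_label(labels):
    """Return the label chosen by the most annotators (ties broken arbitrarily)."""
    (label, _count), = Counter(labels).most_common(1)
    return label

# gold_label is the majority vote over the three annotator labels.
assert majority_label(record["annotator_labels"]) == record["gold_label"]
```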

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Data Collection and Processing

[More Information Needed]

#### Who are the source data producers?

[More Information Needed]

### Annotations [optional]

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

#### Personal and Sensitive Information

[More Information Needed]

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.

## Citation [optional]

**BibTeX:**

@inproceedings{skandalis-etal-2024-new-datasets,
    title = "New Datasets for Automatic Detection of Textual Entailment and of Contradictions between Sentences in {F}rench",
    author = "Skandalis, Maximos  and
      Moot, Richard  and
      Retor{\'e}, Christian  and
      Robillard, Simon",
    editor = "Calzolari, Nicoletta  and
      Kan, Min-Yen  and
      Hoste, Veronique  and
      Lenci, Alessandro  and
      Sakti, Sakriani  and
      Xue, Nianwen",
    booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
    month = may,
    year = "2024",
    address = "Torino, Italy",
    publisher = "ELRA and ICCL",
    url = "https://aclanthology.org/2024.lrec-main.1065",
    pages = "12173--12186",
    abstract = "This paper introduces DACCORD, an original dataset in French for automatic detection of contradictions between sentences. It also presents new, manually translated versions of two datasets, namely the well known dataset RTE3 and the recent dataset GQNLI, from English to French, for the task of natural language inference / recognising textual entailment, which is a sentence-pair classification task. These datasets help increase the admittedly limited number of datasets in French available for these tasks. DACCORD consists of 1034 pairs of sentences and is the first dataset exclusively dedicated to this task and covering among others the topic of the Russian invasion in Ukraine. RTE3-FR contains 800 examples for each of its validation and test subsets, while GQNLI-FR is composed of 300 pairs of sentences and focuses specifically on the use of generalised quantifiers. Our experiments on these datasets show that they are more challenging than the two already existing datasets for the mainstream NLI task in French (XNLI, FraCaS). For languages other than English, most deep learning models for NLI tasks currently have only XNLI available as a training set. Additional datasets, such as ours for French, could permit different training and evaluation strategies, producing more robust results and reducing the inevitable biases present in any single dataset.",
}

@inproceedings{glockner-etal-2018-breaking,
    title = "Breaking {NLI} Systems with Sentences that Require Simple Lexical Inferences",
    author = "Glockner, Max  and
      Shwartz, Vered  and
      Goldberg, Yoav",
    editor = "Gurevych, Iryna  and
      Miyao, Yusuke",
    booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
    month = jul,
    year = "2018",
    address = "Melbourne, Australia",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P18-2103/",
    doi = "10.18653/v1/P18-2103",
    pages = "650--655",
    abstract = "We create a new NLI test set that shows the deficiency of state-of-the-art models in inferences that require lexical and world knowledge. The new examples are simpler than the SNLI test set, containing sentences that differ by at most one word from sentences in the training set. Yet, the performance on the new test set is substantially worse across systems trained on SNLI, demonstrating that these systems are limited in their generalization ability, failing to capture many simple inferences."
}

**ACL:**

Maximos Skandalis, Richard Moot, Christian Retoré, and Simon Robillard. 2024. New Datasets for Automatic Detection of Textual Entailment and of Contradictions between Sentences in French. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 12173–12186, Torino, Italy. ELRA and ICCL.

and

Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI Systems with Sentences that Require Simple Lexical Inferences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 650–655, Melbourne, Australia. Association for Computational Linguistics.

## Acknowledgements

This translation work was carried out within the framework of research supported by the Defence Innovation Agency (AID) of the Directorate General of Armament (DGA) of the French Ministry of Armed Forces, and by the ICO, Institut Cybersécurité Occitanie, funded by Région Occitanie, France.