---
license: cc-by-4.0
task_categories:
  - text-classification
  - token-classification
language:
  - en
tags:
  - nli
  - natural-language-inference
  - contracts
  - legal
size_categories:
  - n<1K
---

ContractNLI: A Dataset for Document-level Natural Language Inference for Contracts

Note: This is a mirror/copy of the original ContractNLI dataset created by Stanford NLP.

Original Source: https://github.com/stanfordnlp/contract-nli
Authors: Yuta Koreeda and Christopher D. Manning (Stanford University)
Paper: Findings of EMNLP 2021

This repository is provided for easier access and integration with Hugging Face datasets. All credit goes to the original authors.

Dataset Description

ContractNLI is a dataset for document-level natural language inference (NLI) on contracts, aimed at automating or supporting the time-consuming procedure of contract review. In this task, a system is given a set of hypotheses (such as "Some obligations of Agreement may survive termination.") and a contract, and it must classify whether each hypothesis is entailed by, contradicted by, or not mentioned in (neutral to) the contract, as well as identify evidence for the decision as spans in the contract.

ContractNLI is the first dataset to apply NLI to contracts and was also the largest corpus of annotated contracts as of September 2021. It is an interesting challenge both from a machine learning perspective (the label distribution is imbalanced and the task is naturally multi-task, while training data is scarce) and from a linguistic perspective (linguistic characteristics of contracts, particularly negation by exception, make the problem difficult).

Original Contact

For questions about the dataset, please contact the original authors via the original repository.

Dataset Specification

More formally, the task consists of:

  • Natural language inference (NLI): Document-level three-class classification (one of Entailment, Contradiction or NotMentioned).
  • Evidence identification: Multi-label binary classification over spans, where a span is a sentence or a list item within a sentence. This is only defined when the NLI label is Entailment or Contradiction. Evidence spans need not be contiguous, and all supporting spans must be identified even when they are redundant.
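The two outputs can be represented with a small data type. The following is a hypothetical sketch (the class and field names are not part of the dataset release), showing how the NLI label constrains the evidence:

```python
from dataclasses import dataclass, field

# The three document-level NLI classes.
NLI_LABELS = ("Entailment", "Contradiction", "NotMentioned")

@dataclass
class Prediction:
    """One system output for a single (contract, hypothesis) pair."""
    nli_label: str                                          # one of NLI_LABELS
    evidence_span_ids: list = field(default_factory=list)   # indices into the contract's span list

    def __post_init__(self):
        if self.nli_label not in NLI_LABELS:
            raise ValueError(f"unknown label: {self.nli_label}")
        # Evidence identification is only defined for Entailment / Contradiction.
        if self.nli_label == "NotMentioned" and self.evidence_span_ids:
            raise ValueError("NotMentioned predictions carry no evidence spans")

pred = Prediction("Entailment", evidence_span_ids=[12, 13, 91])
```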

The dataset contains:

  • 17 hypotheses annotated on 607 non-disclosure agreements (NDAs)
  • The same 17 hypotheses are used for every contract, including those in the test dataset

Data Format

The dataset is provided as JSON files (train.json, dev.json, test.json).

{
  "documents": [
    {
      "id": 1,
      "file_name": "example.pdf",
      "text": "NON-DISCLOSURE AGREEMENT\nThis NON-DISCLOSURE AGREEMENT (\"Agreement\") is entered into this ...",
      "document_type": "search-pdf",
      "url": "https://examplecontract.com/example.pdf",
      "spans": [
        [0, 24],
        [25, 89],
        ...
      ],
      "annotation_sets": [
        {
          "annotations": {
            "nda-1": {
              "choice": "Entailment",
              "spans": [
                12,
                13,
                91
              ]
            },
            "nda-2": {
              "choice": "NotMentioned",
              "spans": []
            },
            ...
          }
        }
      ]
    },
    ...
  ],
  "labels": {
    "nda-1": {
      "short_description": "Explicit identification",
      "hypothesis": "All Confidential Information shall be expressly identified by the Disclosing Party."
    },
    ...
  }
}
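To illustrate the layout, the structure above can be parsed with the Python standard library. The snippet below uses a toy in-memory document that mirrors the real schema (the values are stand-ins, not real data); the span offsets show how evidence text is recovered from the character indices:

```python
import json

# A minimal example mirroring the structure of train.json (toy data, not real).
raw = """
{
  "documents": [{
    "id": 1,
    "file_name": "example.pdf",
    "text": "NON-DISCLOSURE AGREEMENT\\nThis Agreement is made today.",
    "document_type": "search-pdf",
    "url": "https://examplecontract.com/example.pdf",
    "spans": [[0, 24], [25, 54]],
    "annotation_sets": [{
      "annotations": {
        "nda-1": {"choice": "Entailment", "spans": [1]},
        "nda-2": {"choice": "NotMentioned", "spans": []}
      }
    }]
  }],
  "labels": {
    "nda-1": {"short_description": "Explicit identification",
              "hypothesis": "All Confidential Information shall be expressly identified by the Disclosing Party."}
  }
}
"""
data = json.loads(raw)
doc = data["documents"][0]
# Recover the raw text of each span from its character offsets.
span_texts = [doc["text"][start:end] for start, end in doc["spans"]]
```

With the real files, replace `raw` with the contents of train.json, dev.json, or test.json.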

Field Descriptions

Core fields:

  • text: The full document text
  • spans: List of spans, each given as a pair of start and end character indices.
  • annotation_sets: Provided as a list to accommodate multiple annotations per document. Since each document has only a single annotation set, you may safely access the annotations via document['annotation_sets'][0]['annotations'].
  • annotations: Each key is a hypothesis key. choice is one of Entailment, Contradiction or NotMentioned. spans lists indices into the document's spans above; it is empty when choice is NotMentioned.
  • labels: Each key is a hypothesis key. hypothesis is the hypothesis text to be used in NLI.
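Putting these fields together, one way to flatten a document into (premise, hypothesis, label, evidence) tuples is sketched below; `doc` and `labels` follow the field layout described above, with toy stand-in values:

```python
# Toy stand-ins following the documented field layout.
doc = {
    "text": "NON-DISCLOSURE AGREEMENT\nObligations survive termination.",
    "spans": [[0, 24], [25, 57]],
    "annotation_sets": [{"annotations": {
        "nda-1": {"choice": "Entailment", "spans": [1]},
    }}],
}
labels = {"nda-1": {"hypothesis": "Some obligations of Agreement may survive termination."}}

examples = []
# Each document carries a single annotation set.
annotations = doc["annotation_sets"][0]["annotations"]
for key, ann in annotations.items():
    # Map evidence span indices to the corresponding text slices.
    evidence = [doc["text"][s:e] for s, e in (doc["spans"][i] for i in ann["spans"])]
    examples.append((doc["text"], labels[key]["hypothesis"], ann["choice"], evidence))
```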

Supplemental fields:

  • id: A unique ID throughout train, development and test datasets.
  • file_name: The filename of the original document in the dataset zip file.
  • document_type: One of search-pdf (a PDF from a search engine), sec-text (a text file from SEC filing) or sec-html (an HTML file from SEC filing).
  • url: The URL from which the document was obtained.

Baseline System

In the original paper, the authors introduced Span NLI BERT, a strong baseline for this task. It (1) makes the problem of evidence identification easier by modeling the problem as multi-label classification over spans instead of trying to predict the start and end tokens, and (2) introduces more sophisticated context segmentation to deal with long documents.
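As a rough illustration of the segmentation idea only (this is not the Span NLI BERT algorithm; see the implementation link below for the real one), a greedy windowing that packs consecutive spans into overlapping, budget-limited contexts might look like:

```python
def segment_spans(span_lengths, max_tokens=512, stride=2):
    """Greedily split a document's spans into overlapping contexts, each
    fitting a transformer's token budget. Simplified sketch, not the
    authors' implementation."""
    contexts, start = [], 0
    while start < len(span_lengths):
        end, used = start, 0
        # Pack whole spans until the token budget is exhausted.
        while end < len(span_lengths) and used + span_lengths[end] <= max_tokens:
            used += span_lengths[end]
            end += 1
        end = max(end, start + 1)               # always make progress
        contexts.append((start, end))           # half-open span-index window
        if end >= len(span_lengths):
            break
        start = max(end - stride, start + 1)    # overlap `stride` spans for context

    return contexts

ctxs = segment_spans([200, 300, 250, 100, 400], max_tokens=512)
```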

Implementation: https://github.com/stanfordnlp/contract-nli-bert

License

This dataset is released under the CC BY 4.0 license. Please refer to LICENSE or https://creativecommons.org/licenses/by/4.0/ for the exact terms.

Citation

Please cite the original paper when using this dataset:

@inproceedings{koreeda-manning-2021-contractnli,
    title = "ContractNLI: A Dataset for Document-level Natural Language Inference for Contracts",
    author = "Koreeda, Yuta and Manning, Christopher D.",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
    year = "2021",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-emnlp.164/",
}

Original Repository

https://github.com/stanfordnlp/contract-nli

Changelog

  • 10/5/2021: Initial release by Stanford NLP