# ContractNLI: A Dataset for Document-level Natural Language Inference for Contracts

ContractNLI is a dataset for document-level natural language inference (NLI) on contracts, whose goal is to automate and support the time-consuming procedure of contract review.
In this task, a system is given a set of hypotheses (such as "Some obligations of Agreement may survive termination.") and a contract, and it is asked to classify whether each hypothesis is _entailed by_, _contradicted by_ or _not mentioned by_ (neutral to) the contract, as well as to identify _evidence_ for the decision as spans in the contract.

ContractNLI is the first dataset to utilize NLI for contracts and is also the largest corpus of annotated contracts (as of September 2021).
ContractNLI is an interesting challenge both from a machine learning perspective (the label distribution is imbalanced and the task is naturally multi-task, while training data is scarce) and from a linguistic perspective (linguistic characteristics of contracts, particularly negation by exception, make the problem difficult).

Details of ContractNLI can be found in our paper, published in Findings of EMNLP 2021.
If you have a question regarding our dataset, you can contact us by emailing koreeda@stanford.edu or by creating an issue in this repository.

## Dataset specification

More formally, the task consists of:
* **Natural language inference (NLI)**: Document-level three-class classification (one of `Entailment`, `Contradiction` or `NotMentioned`).
* **Evidence identification**: Multi-label binary classification over _spans_, where a _span_ is a sentence or a list item within a sentence. This is only defined when the NLI label is either `Entailment` or `Contradiction`. Evidence spans need not be contiguous, but they must be identified comprehensively even where they are redundant.
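
Concretely, the evidence identification target for a single (contract, hypothesis) pair can be encoded as one binary label per span. The sketch below illustrates this encoding; the function name and shape are ours, not part of the dataset.

```python
def evidence_targets(num_spans, evidence_span_indices):
    """Binary target vector for evidence identification: one label per
    span in the document, 1 where the span is evidence, 0 elsewhere."""
    targets = [0] * num_spans
    for i in evidence_span_indices:
        targets[i] = 1
    return targets

# A document with 5 spans where spans 1 and 3 are evidence:
print(evidence_targets(5, [1, 3]))  # → [0, 1, 0, 1, 0]
```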

We have 17 hypotheses annotated on 607 non-disclosure agreements (NDAs).
The hypotheses are fixed throughout all the contracts, including the test dataset.

Our dataset is provided as JSON files.

```json
{
  "documents": [
    {
      "id": 1,
      "file_name": "example.pdf",
      "text": "NON-DISCLOSURE AGREEMENT\nThis NON-DISCLOSURE AGREEMENT (\"Agreement\") is entered into this ...",
      "document_type": "search-pdf",
      "url": "https://examplecontract.com/example.pdf",
      "spans": [
        [0, 24],
        [25, 89],
        ...
      ],
      "annotation_sets": [
        {
          "annotations": {
            "nda-1": {
              "choice": "Entailment",
              "spans": [
                12,
                13,
                91
              ]
            },
            "nda-2": {
              "choice": "NotMentioned",
              "spans": []
            },
            ...
          }
        }
      ]
    },
    ...
  ],
  "labels": {
    "nda-1": {
      "short_description": "Explicit identification",
      "hypothesis": "All Confidential Information shall be expressly identified by the Disclosing Party."
    },
    ...
  }
}
```

The core information in our dataset is:
* `text`: The full document text.
* `spans`: A list of spans, each given as a pair of start and end character indices.
* `annotation_sets`: Provided as a list to accommodate multiple annotations per document. Since we only have a single annotation for each document, you may safely access the annotation by `document['annotation_sets'][0]['annotations']`.
* `annotations`: Each key represents a hypothesis key. `choice` is either `Entailment`, `Contradiction` or `NotMentioned`. `spans` is given as indices into `spans` above, and is empty when `choice` is `NotMentioned`.
* `labels`: Each key represents a hypothesis key. `hypothesis` is the hypothesis text that should be used in NLI.
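
Putting these pieces together, accessing the annotations and recovering evidence text might look like the following sketch. The in-memory example mirrors the schema above, but its values are illustrative, not taken from the real dataset files.

```python
import json  # for loading the real files, e.g. dataset = json.load(open("train.json"))

# Minimal in-memory example following the schema (values are illustrative).
dataset = {
    "documents": [
        {
            "id": 1,
            "text": "NON-DISCLOSURE AGREEMENT\nThis Agreement is made by the parties.",
            "spans": [[0, 24], [25, 63]],
            "annotation_sets": [
                {
                    "annotations": {
                        "nda-1": {"choice": "Entailment", "spans": [1]},
                        "nda-2": {"choice": "NotMentioned", "spans": []},
                    }
                }
            ],
        }
    ],
    "labels": {
        "nda-1": {"hypothesis": "All Confidential Information shall be expressly identified by the Disclosing Party."},
        "nda-2": {"hypothesis": "..."},
    },
}

doc = dataset["documents"][0]
# Single annotation set per document, as noted above.
annotations = doc["annotation_sets"][0]["annotations"]

for hyp_key, annotation in annotations.items():
    # Evidence span indices point into doc["spans"]; slicing the document
    # text with the (start, end) character offsets recovers each span.
    evidence = [
        doc["text"][start:end]
        for start, end in (doc["spans"][i] for i in annotation["spans"])
    ]
    print(hyp_key, annotation["choice"], evidence)
    # nda-1 Entailment ['This Agreement is made by the parties.']
    # nda-2 NotMentioned []
```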

The JSON file comes with supplemental information. You may simply ignore it if you are only interested in developing machine learning systems.
* `id`: A unique ID throughout the train, development and test datasets.
* `file_name`: The filename of the original document in the dataset zip file.
* `document_type`: One of `search-pdf` (a PDF from a search engine), `sec-text` (a text file from an SEC filing) or `sec-html` (an HTML file from an SEC filing).
* `url`: The URL that we obtained the document from.

## Baseline system

In our paper, we introduced Span NLI BERT, a strong baseline for our task.
It (1) makes evidence identification easier by modeling it as multi-label classification over spans instead of trying to predict the start and end tokens, and (2) introduces more sophisticated context segmentation to deal with long documents.
We showed in our paper that Span NLI BERT significantly outperforms the existing models.

You can find the implementation of Span NLI BERT in [another repository](https://github.com/stanfordnlp/contract-nli-bert).

## License

Our dataset is released under CC BY 4.0.
Please refer to the attached "[LICENSE](./LICENSE)" file or https://creativecommons.org/licenses/by/4.0/ for the exact terms.

When you use our dataset in your work, please cite our paper:

```bibtex
@inproceedings{koreeda-manning-2021-contractnli,
    title = "ContractNLI: A Dataset for Document-level Natural Language Inference for Contracts",
    author = "Koreeda, Yuta and
      Manning, Christopher D.",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
    year = "2021",
    publisher = "Association for Computational Linguistics"
}
```

## Changelog and release note

* 10/5/2021: Initial release
|