alicetrismik committed (verified)
Commit 6047919 · 1 Parent(s): 8b412e0

Update README with proper attribution to Stanford NLP

Files changed (1):
1. README.md (+62 -22)
README.md CHANGED
---
license: cc-by-4.0
task_categories:
- text-classification
- token-classification
language:
- en
tags:
- nli
- natural-language-inference
- contracts
- legal
size_categories:
- n<1K
---

# ContractNLI: A Dataset for Document-level Natural Language Inference for Contracts

> **Note**: This is a mirror/copy of the original ContractNLI dataset created by Stanford NLP.
>
> - **Original Source**: https://github.com/stanfordnlp/contract-nli
> - **Authors**: Yuta Koreeda and Christopher D. Manning (Stanford University)
> - **Paper**: [Findings of EMNLP 2021](https://aclanthology.org/2021.findings-emnlp.164/)
>
> This repository is provided for easier access and integration with Hugging Face datasets. All credit goes to the original authors.

## Dataset Description

ContractNLI is a dataset for document-level natural language inference (NLI) on contracts. Its goal is to automate or support the time-consuming process of contract review.
In this task, a system is given a set of hypotheses (such as "Some obligations of Agreement may survive termination.") and a contract, and it is asked to classify whether each hypothesis is _entailed by_, _contradicted by_ or _not mentioned by_ (neutral to) the contract, as well as to identify the _evidence_ for that decision as spans in the contract.

ContractNLI is the first dataset to apply NLI to contracts and is also the largest corpus of annotated contracts (as of September 2021).
ContractNLI is an interesting challenge both from a machine learning perspective (the label distribution is imbalanced and the task is naturally multi-task, while training data is scarce) and from a linguistic perspective (linguistic characteristics of contracts, particularly negations by exceptions, make the problem difficult).

### Original Contact

For questions about the dataset, please contact the original authors:
- Email: koreeda@stanford.edu
- GitHub Issues: https://github.com/stanfordnlp/contract-nli/issues

## Dataset Specification

More formally, the task consists of:
* **Natural language inference (NLI)**: Document-level three-class classification (one of `Entailment`, `Contradiction` or `NotMentioned`).
* **Evidence identification**: Multi-label binary classification over _spans_, where a _span_ is a sentence or a list item within a sentence. This is only defined when the NLI label is `Entailment` or `Contradiction`. Evidence spans need not be contiguous, and all supporting spans must be identified even when they are redundant (see the sketch below).
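
A minimal illustrative sketch of the two targets for a single contract-hypothesis pair, assuming nothing beyond the definitions above: one NLI label plus a multi-hot evidence vector over the contract's spans (this code is not part of the dataset).

```python
# Illustrative only: targets for one (contract, hypothesis) pair.
NLI_LABELS = ("Entailment", "Contradiction", "NotMentioned")

def make_targets(nli_label, evidence_span_ids, num_spans):
    """Build the NLI target and the multi-hot evidence vector over spans."""
    assert nli_label in NLI_LABELS
    evidence = [0] * num_spans
    for idx in evidence_span_ids:  # indices into the document's span list
        evidence[idx] = 1
    return {"nli_label": nli_label, "evidence": evidence}

# e.g. a contract with 5 spans where spans 1 and 3 support an Entailment decision:
print(make_targets("Entailment", [1, 3], num_spans=5))
# -> {'nli_label': 'Entailment', 'evidence': [0, 1, 0, 1, 0]}
```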
 
The dataset contains:
- 17 hypotheses annotated on 607 non-disclosure agreements (NDAs)
- Hypotheses that are fixed across all contracts, including the test set

### Data Format

The dataset is provided as JSON files (`train.json`, `dev.json`, `test.json`).

```json
{
  ...
}
```

### Field Descriptions

**Core fields** (a loading sketch follows the lists below):
* `text`: The full document text.
* `spans`: List of spans, each given as a pair of start and end character indices into `text`.
* `annotation_sets`: Provided as a list to accommodate multiple annotations per document. Since each document has only a single annotation, you may safely access it via `document['annotation_sets'][0]['annotations']`.
* `annotations`: Each key is a hypothesis key. `choice` is either `Entailment`, `Contradiction` or `NotMentioned`. `spans` is given as indices into the document-level `spans` above and is empty when `choice` is `NotMentioned`.
* `labels`: Each key is a hypothesis key. `hypothesis` is the hypothesis text that should be used for NLI.

**Supplemental fields** (these can be ignored if you are only interested in building machine learning systems):
* `id`: A unique ID across the train, development and test datasets.
* `file_name`: The filename of the original document in the dataset zip file.
* `document_type`: One of `search-pdf` (a PDF from a search engine), `sec-text` (a text file from an SEC filing) or `sec-html` (an HTML file from an SEC filing).
* `url`: The URL that the document was obtained from.
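
As a usage sketch, the snippet below loads one split and flattens it into contract-hypothesis records using only the fields documented above. It assumes the top-level JSON object contains a `documents` list and a `labels` mapping, as in the original release; adjust the keys if your copy is laid out differently.

```python
import json

# Usage sketch based on the field descriptions above (assumes a top-level
# `documents` list and a `labels` mapping; adjust if your copy differs).
with open("train.json", encoding="utf-8") as f:
    data = json.load(f)

# Hypothesis key -> hypothesis text to be used for NLI.
hypotheses = {key: label["hypothesis"] for key, label in data["labels"].items()}

examples = []
for document in data["documents"]:
    text = document["text"]
    spans = document["spans"]  # [start, end] character offsets into `text`
    # Single annotation set per document, as noted above.
    annotations = document["annotation_sets"][0]["annotations"]
    for hyp_key, annotation in annotations.items():
        examples.append({
            "contract_id": document["id"],
            "hypothesis": hypotheses[hyp_key],
            "label": annotation["choice"],  # Entailment / Contradiction / NotMentioned
            # Evidence spans as raw text; empty when the label is NotMentioned.
            "evidence": [text[spans[i][0]:spans[i][1]] for i in annotation["spans"]],
        })

print(len(examples), "contract-hypothesis pairs")
```

Each record is then an ordinary document-level NLI example; the `evidence` list can likewise be turned into the multi-hot span vector sketched earlier for the evidence-identification sub-task.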
 
## Baseline System

In the original paper, the authors introduced **Span NLI BERT**, a strong baseline for this task.
It (1) makes evidence identification easier by modeling it as multi-label classification over spans instead of predicting start and end tokens, and (2) introduces more sophisticated context segmentation to deal with long documents.
The paper showed that Span NLI BERT significantly outperforms existing models.
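
As a toy illustration of the second point only (this is **not** the authors' implementation), one simple way to segment a long contract is to pack whole spans into overlapping windows so that no span is split across windows:

```python
# Toy sketch of span-aware context segmentation (not the authors' code):
# pack consecutive spans into windows under a character budget, carrying a
# few spans of overlap so every span appears intact in at least one window.
def segment(spans, max_chars=2000, overlap_spans=2):
    windows, current, fresh = [], [], 0
    for i, (_start, end) in enumerate(spans):
        current.append(i)
        fresh += 1
        if end - spans[current[0]][0] >= max_chars:
            windows.append(current)
            current, fresh = current[-overlap_spans:], 0
    if fresh:
        windows.append(current)
    return windows  # each window is a list of span indices

# e.g. windows = segment(document["spans"]); the text of a window w is
# document["text"][spans[w[0]][0]:spans[w[-1]][1]]
```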
 
Implementation: https://github.com/stanfordnlp/contract-nli-bert
 
## License

This dataset is released under the **CC BY 4.0** license.
Please refer to [LICENSE](./LICENSE) or https://creativecommons.org/licenses/by/4.0/ for the exact terms.

## Citation

**Please cite the original paper when using this dataset:**

```bibtex
@inproceedings{koreeda-manning-2021-contractnli,
    title = "ContractNLI: A Dataset for Document-level Natural Language Inference for Contracts",
    author = "Koreeda, Yuta and Manning, Christopher D.",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
    year = "2021",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-emnlp.164/",
}
```

## Original Repository

- **GitHub**: https://github.com/stanfordnlp/contract-nli
- **Paper**: https://aclanthology.org/2021.findings-emnlp.164/
- **Code (Span NLI BERT)**: https://github.com/stanfordnlp/contract-nli-bert

## Changelog

* 10/5/2021: Initial release by Stanford NLP