# PIE Dataset Card for "cdcp"

This is a [PyTorch-IE](https://github.com/ChristophAlt/pytorch-ie) wrapper for the
[CDCP Huggingface dataset loading script](https://huggingface.co/datasets/DFKI-SLT/cdcp).

## Usage

```python
from pie_datasets import load_dataset
from pie_documents.documents import TextDocumentWithLabeledSpansAndBinaryRelations

# load the dataset
dataset = load_dataset("pie/cdcp")

# if required, normalize the document type (see section Document Converters below)
dataset_converted = dataset.to_document_type(TextDocumentWithLabeledSpansAndBinaryRelations)
assert isinstance(dataset_converted["train"][0], TextDocumentWithLabeledSpansAndBinaryRelations)

# get the first relation in the first document
doc = dataset_converted["train"][0]
print(doc.binary_relations[0])
# BinaryRelation(head=LabeledSpan(start=0, end=78, label='value', score=1.0), tail=LabeledSpan(start=79, end=242, label='value', score=1.0), label='reason', score=1.0)
print(doc.binary_relations[0].resolve())
# ('reason', (('value', 'State and local court rules sometimes make default judgments much more likely.'), ('value', 'For example, when a person who allegedly owes a debt is told to come to court on a work day, they may be forced to choose between a default judgment and their job.')))
```
## Data Schema

The document type for this dataset is `CDCPDocument`, which defines the following data fields:

- `text` (str)
- `id` (str, optional)
- `metadata` (dictionary, optional)

and the following annotation layers:

- `propositions` (annotation type: `LabeledSpan`, target: `text`)
- `relations` (annotation type: `BinaryRelation`, target: `propositions`)
- `urls` (annotation type: `Attribute`, target: `propositions`)

See [here](https://github.com/ArneBinder/pie-documents/blob/main/src/pie_documents/annotations.py) for the annotation type definitions.
## Document Converters

The dataset provides document converters for the following target document types:

- `pie_documents.documents.TextDocumentWithLabeledSpansAndBinaryRelations`
  - `labeled_spans`: `LabeledSpan` annotations, converted from `CDCPDocument`'s `propositions`
    - labels: `fact`, `policy`, `reference`, `testimony`, `value`
    - if a proposition contains whitespace at the beginning and/or the end, that whitespace is trimmed
  - `binary_relations`: `BinaryRelation` annotations, converted from `CDCPDocument`'s `relations`
    - labels: `reason`, `evidence`

See [here](https://github.com/ArneBinder/pie-documents/blob/main/src/pie_documents/documents.py) for the document type
definitions.
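The whitespace trimming mentioned above can be sketched in plain Python. This is only an illustration, not the converter's actual implementation; `trim_span` is a hypothetical name, and character-based span offsets with exclusive ends are assumed:

```python
def trim_span(text: str, start: int, end: int) -> tuple[int, int]:
    """Shift span offsets inward so the span excludes leading/trailing whitespace."""
    while start < end and text[start].isspace():
        start += 1
    while end > start and text[end - 1].isspace():
        end -= 1
    return start, end


text = "  courts should act  "
start, end = trim_span(text, 0, len(text))
print(text[start:end])  # prints "courts should act"
```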
### Collected Statistics after Document Conversion

We use the script `evaluate_documents.py` from [PyTorch-IE-Hydra-Template](https://github.com/ArneBinder/pytorch-ie-hydra-template-1) to generate these statistics.
After checking out that code, the statistics and plots can be generated with:

```commandline
python src/evaluate_documents.py dataset=cdcp_base metric=METRIC
```

where `METRIC` is one of the available metric configs in `configs/metric` (see [metrics](https://github.com/ArneBinder/pytorch-ie-hydra-template-1/tree/main/configs/metric)).

This also requires the following dataset config at `configs/dataset/cdcp_base.yaml` in the repository:
```yaml
_target_: src.utils.execute_pipeline
input:
  _target_: pie_datasets.DatasetDict.load_dataset
  path: pie/cdcp
  revision: 001722894bdca6df6a472d0d186a3af103e392c5
```
For token-based metrics, this uses `bert-base-uncased` from `transformers.AutoTokenizer` (see [AutoTokenizer](https://huggingface.co/docs/transformers/v4.37.1/en/model_doc/auto#transformers.AutoTokenizer) and [bert-base-uncased](https://huggingface.co/bert-base-uncased)) to tokenize the `text` of a `TextDocumentWithLabeledSpansAndBinaryRelations` (see the [document type](https://github.com/ArneBinder/pie-documents/blob/main/src/pie_documents/documents.py)).
#### Relation argument (outer) token distance per label

The distance is measured from the first token of the first argumentative unit to the last token of the last unit, i.e. the outer distance.
We collect the following statistics: number of documents in the split (*no. doc*), number of relations (*len*), mean token distance (*mean*), standard deviation of the distance (*std*), minimum outer distance (*min*), and maximum outer distance (*max*).
The collapsible sections below additionally contain histograms of these relation distances (x-axis: distance, y-axis: count).
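To make the definition concrete, the outer distance for a pair of token-level argument spans can be computed as below. This is an illustrative sketch with hypothetical offsets; the actual metric is implemented in the template repository:

```python
def outer_token_distance(head: tuple[int, int], tail: tuple[int, int]) -> int:
    """Distance from the first token of the earlier argument to the last
    token of the later one (end offsets are exclusive)."""
    first_start = min(head[0], tail[0])
    last_end = max(head[1], tail[1])
    return last_end - first_start


# hypothetical token offsets: head spans tokens [0, 12), tail spans [14, 40)
print(outer_token_distance((0, 12), (14, 40)))  # prints 40
```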
<details>
<summary>Command</summary>

```
python src/evaluate_documents.py dataset=cdcp_base metric=relation_argument_token_distances
```

</details>
##### train (580 documents)

|          |  len | max |   mean | min |    std |
| :------- | ---: | --: | -----: | --: | -----: |
| ALL      | 2204 | 240 | 48.839 |   8 | 31.462 |
| evidence |   94 | 196 | 66.723 |  14 | 42.444 |
| reason   | 2110 | 240 | 48.043 |   8 |  30.64 |

<details>
<summary>Histogram (split: train, 580 documents)</summary>

![rtd-label_cdcp_train.png](img/rtd-label_cdcp_train.png)

</details>
##### test (150 documents)

|          | len | max |   mean | min |    std |
| :------- | --: | --: | -----: | --: | -----: |
| ALL      | 648 | 212 | 51.299 |   8 | 31.159 |
| evidence |  52 | 170 | 73.923 |  20 | 39.855 |
| reason   | 596 | 212 | 49.326 |   8 |  29.47 |

<details>
<summary>Histogram (split: test, 150 documents)</summary>

![rtd-label_cdcp_test.png](img/rtd-label_cdcp_test.png)

</details>
#### Span lengths (tokens)

The span length is the number of tokens from the first token of an argumentative unit to its last token.
We collect the following statistics: number of documents in the split (*no. doc*), number of spans (*len*), mean number of tokens per span (*mean*), standard deviation of the number of tokens (*std*), minimum number of tokens in a span (*min*), and maximum number of tokens in a span (*max*).
The collapsible sections below additionally contain histograms of these span lengths (x-axis: length, y-axis: count).
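As an illustration of how such summary numbers can be derived, the statistics for a list of span lengths can be computed with Python's standard `statistics` module. This is only a sketch with made-up lengths; the card does not state whether *std* is the sample or population standard deviation, so the sample version is assumed here:

```python
import statistics


def summarize(lengths: list[int]) -> dict[str, float]:
    """Compute len / mean / std / min / max as reported in the statistics tables."""
    return {
        "len": len(lengths),
        "mean": round(statistics.mean(lengths), 3),
        "std": round(statistics.stdev(lengths), 3),  # sample standard deviation
        "min": min(lengths),
        "max": max(lengths),
    }


print(summarize([2, 10, 19, 25, 142]))
```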
<details>
<summary>Command</summary>

```
python src/evaluate_documents.py dataset=cdcp_base metric=span_lengths_tokens
```

</details>
| statistics |  train |   test |
| :--------- | -----: | -----: |
| no. doc    |    580 |    150 |
| len        |   3901 |   1026 |
| mean       | 19.441 | 18.758 |
| std        |  11.71 | 10.388 |
| min        |      2 |      3 |
| max        |    142 |     83 |

<details>
<summary>Histogram (split: train, 580 documents)</summary>

![slt_cdcp_train.png](img/slt_cdcp_train.png)

</details>
<details>
<summary>Histogram (split: test, 150 documents)</summary>

![slt_cdcp_test.png](img/slt_cdcp_test.png)

</details>
#### Token length (tokens)

The token length is the number of tokens in a document, measured from its first token to its last.
We collect the following statistics: number of documents in the split (*no. doc*), mean document token length (*mean*), standard deviation of the length (*std*), minimum number of tokens in a document (*min*), and maximum number of tokens in a document (*max*).
The collapsible sections below additionally contain histograms of these token lengths (x-axis: length, y-axis: count).

<details>
<summary>Command</summary>

```
python src/evaluate_documents.py dataset=cdcp_base metric=count_text_tokens
```

</details>
| statistics |   train |    test |
| :--------- | ------: | ------: |
| no. doc    |     580 |     150 |
| mean       | 130.781 | 128.673 |
| std        | 101.121 |  98.708 |
| min        |      13 |      15 |
| max        |     562 |     571 |

<details>
<summary>Histogram (split: train, 580 documents)</summary>

![tl_cdcp_train.png](img/tl_cdcp_train.png)

</details>
<details>
<summary>Histogram (split: test, 150 documents)</summary>

![tl_cdcp_test.png](img/tl_cdcp_test.png)

</details>