Joelito committed on
Commit 46a7793 · 1 Parent(s): c853db6

Upload 5 files

Files changed (5)
  1. README.md +125 -0
  2. data/test.jsonl.xz +3 -0
  3. data/train.jsonl.xz +3 -0
  4. data/valid.jsonl.xz +3 -0
  5. prepare_data.py +28 -0
README.md ADDED
@@ -0,0 +1,125 @@
+ ---
+ license: cc-by-nc-sa-4.0
+ ---
+ # Dataset Card for MiningLegalArguments
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [GitHub](https://github.com/eliasjacob/paper_brcad5/)
+ - **Repository:** [Kaggle](https://www.kaggle.com/datasets/eliasjacob/brcad5)
+ - **Paper:** [PLOS ONE](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0272287)
+ - **Leaderboard:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ [More Information Needed]
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ [More Information Needed]
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ [More Information Needed]
+
+ ### Data Fields
+
+ [More Information Needed]
+
+ ### Data Splits
+
+ [More Information Needed]
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ [More Information Needed]
+
+ ### Contributions
+
+ Thanks to [@JoelNiklaus](https://github.com/JoelNiklaus) for adding this dataset.
data/test.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:640790c603b42cdf30f9cc5d71b81ddb0f62b49c3d50041b6208a345b66d8463
+ size 130133552
data/train.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4205b3b1e498bcf4e5c97d72aafd5c0e73bfd5ab62100aacf93c523f7de78478
+ size 697755220
data/valid.jsonl.xz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5153daa314522681121ea29324268a8c7d6d6f3583f1c94abd3e01992302e6ff
+ size 130310888
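The three data files above are stored as Git LFS pointers: the repository holds only a `version`/`oid`/`size` triple, and the actual `.jsonl.xz` archives live in LFS storage. After downloading, the archive can be checked against the pointer's `sha256` oid. A minimal sketch (the helper name `sha256_of` is mine, not part of this repo):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 in chunks, so large archives fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()

# Compare against the oid from the LFS pointer, e.g. for data/valid.jsonl.xz:
# expected = "5153daa314522681121ea29324268a8c7d6d6f3583f1c94abd3e01992302e6ff"
# assert sha256_of("data/valid.jsonl.xz") == expected
```

A mismatch here usually means the pointer file itself was downloaded instead of the LFS object (e.g. `git lfs pull` was not run).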
prepare_data.py ADDED
@@ -0,0 +1,28 @@
+ import pandas as pd
+ import os
+ from typing import Union
+
+ import datasets
+ from datasets import load_dataset
+
+
+ def save_and_compress(dataset: Union[datasets.Dataset, pd.DataFrame], name: str, idx=None):
+     if idx is not None:
+         path = f"{name}_{idx}.jsonl"
+     else:
+         path = f"{name}.jsonl"
+
+     print("Saving to", path)
+     dataset.to_json(path, force_ascii=False, orient='records', lines=True)
+
+     print("Compressing...")
+     os.system(f'xz -zkf -T0 --memlimit-compress=60% {path}')  # -T0 uses all available cores
+
+
+ for split in ["train", "valid", "test"]:
+     dataset = load_dataset("parquet", data_files=f"original_data/{split}_en.parquet", split="train")
+     dataset = dataset.remove_columns(['case_marked_as_closed'])  # this column causes serialization problems
+     # the date columns are also potentially an issue: overflows
+     dataset = dataset.remove_columns(['filing_date', 'date_first_instance_ruling', 'date_appeal_panel_session'])
+     print(dataset[0])
+     save_and_compress(dataset, f"data/{split}")
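`prepare_data.py` writes each split as JSON Lines and then shells out to `xz` for compression. Reading the result back needs no external tool, since Python's standard-library `lzma` module handles `.xz` transparently. A minimal sketch (the reader function is mine; the path is illustrative):

```python
import json
import lzma

def read_jsonl_xz(path: str):
    """Yield one record (dict) per line from an xz-compressed JSON Lines file."""
    with lzma.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

# Usage (path is illustrative):
# first_record = next(read_jsonl_xz("data/valid.jsonl.xz"))
```

Streaming line by line avoids decompressing the whole archive into memory, which matters for the ~700 MB compressed train split.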