FALcon6 committed on
Commit 392890d · verified · 1 Parent(s): 22e2242

Upload 5 files

README.md ADDED
@@ -0,0 +1,178 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - found
+ language:
+ - en
+ license:
+ - other
+ license_details: LDC User Agreement for Non-Members
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - text-generation
+ - fill-mask
+ task_ids:
+ - language-modeling
+ - masked-language-modeling
+ paperswithcode_id: null
+ pretty_name: Penn Treebank
+ dataset_info:
+   features:
+   - name: sentence
+     dtype: string
+   config_name: penn_treebank
+   splits:
+   - name: train
+     num_bytes: 5143706
+     num_examples: 42068
+   - name: test
+     num_bytes: 453710
+     num_examples: 3761
+   - name: validation
+     num_bytes: 403156
+     num_examples: 3370
+   download_size: 5951345
+   dataset_size: 6000572
+ ---
+
+ # Dataset Card for Penn Treebank
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** https://catalog.ldc.upenn.edu/LDC99T42
+ - **Repository:**
+   - https://raw.githubusercontent.com/wojzaremba/lstm/master/data/ptb.train.txt
+   - https://raw.githubusercontent.com/wojzaremba/lstm/master/data/ptb.valid.txt
+   - https://raw.githubusercontent.com/wojzaremba/lstm/master/data/ptb.test.txt
+ - **Paper:** https://www.aclweb.org/anthology/J93-2004.pdf
+ - **Leaderboard:** [Needs More Information]
+ - **Point of Contact:** [Needs More Information]
+
+ ### Dataset Summary
+
+ This is the Penn Treebank Project: Release 2 CDROM, featuring a million words of 1989 Wall Street Journal material.
+ The rare words in this version have already been replaced with the `<unk>` token, and numbers with the `<N>` token.
+
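The rare-word replacement described above follows the standard frequency-cutoff recipe (the published PTB preprocessing is commonly reported as keeping a fixed 10,000-word vocabulary). A minimal sketch of the idea, using an invented toy corpus and an assumed count threshold of 2 rather than the real vocabulary:

```python
from collections import Counter

# Toy corpus (illustrative only, not actual PTB lines).
corpus = ["the cat sat", "the dog sat", "a zyzzyva appeared"]

# Keep tokens that occur at least twice; everything else becomes <unk>.
counts = Counter(w for line in corpus for w in line.split())
vocab = {w for w, c in counts.items() if c >= 2}

processed = [
    " ".join(w if w in vocab else "<unk>" for w in line.split())
    for line in corpus
]
```

Here only `the` and `sat` clear the threshold, so every other token is mapped to `<unk>`.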
+ ### Supported Tasks and Leaderboards
+
+ Language Modelling
+
+ ### Languages
+
+ The text in the dataset is in American English.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ [Needs More Information]
+
+ ### Data Fields
+
+ - `sentence`: a `string` feature containing one line of the corpus.
+
+ ### Data Splits
+
+ | split      | num_examples | num_bytes |
+ |------------|--------------|-----------|
+ | train      | 42068        | 5143706   |
+ | validation | 3370         | 403156    |
+ | test       | 3761         | 453710    |
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [Needs More Information]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [Needs More Information]
+
+ #### Who are the source language producers?
+
+ [Needs More Information]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [Needs More Information]
+
+ #### Who are the annotators?
+
+ [Needs More Information]
+
+ ### Personal and Sensitive Information
+
+ [Needs More Information]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [Needs More Information]
+
+ ### Discussion of Biases
+
+ [Needs More Information]
+
+ ### Other Known Limitations
+
+ [Needs More Information]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [Needs More Information]
+
+ ### Licensing Information
+
+ This dataset is provided for research purposes only. Please check the dataset license for additional information.
+
+ ### Citation Information
+
+ @article{marcus-etal-1993-building,
+     title = "Building a Large Annotated Corpus of {E}nglish: The {P}enn {T}reebank",
+     author = "Marcus, Mitchell P. and
+       Santorini, Beatrice and
+       Marcinkiewicz, Mary Ann",
+     journal = "Computational Linguistics",
+     volume = "19",
+     number = "2",
+     year = "1993",
+     url = "https://www.aclweb.org/anthology/J93-2004",
+     pages = "313--330",
+ }
+
+ ### Contributions
+
+ Thanks to [@harshalmittal4](https://github.com/harshalmittal4) for adding this dataset.
penn_treebank/test/0000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:06a92482c9c75f36a2fb30eee8da0ad4db5740f013c16c622f853e09ef27cb4f
+ size 261699
penn_treebank/train/0000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5a53760d31bbf160f7974534b5769f69fd4383c07fb17be6a04f392abb4cab36
+ size 2961439
penn_treebank/validation/0000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9787220efe21c65cb730635963e7398933f23889c37cfcb50d728f2e59ac7c8f
+ size 235756
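The three parquet entries above are git-lfs pointer files rather than the data itself: each records the pointer spec version, a sha256 object id, and the blob size in bytes. A minimal sketch of parsing one such pointer into its fields, using the test-split pointer shown above:

```python
# A git-lfs pointer file, exactly as it appears in the commit above.
pointer_text = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:06a92482c9c75f36a2fb30eee8da0ad4db5740f013c16c622f853e09ef27cb4f\n"
    "size 261699\n"
)

# Each line is "<key> <value>"; split on the first space only.
fields = dict(line.split(" ", 1) for line in pointer_text.splitlines() if line)

algo, digest = fields["oid"].split(":", 1)  # hash algorithm and hex digest
size_bytes = int(fields["size"])            # size of the real blob, not the pointer
```

The pointer is what git stores in the repository; the actual parquet blob is fetched separately by LFS using the oid.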
ptb_text_only.py ADDED
@@ -0,0 +1,146 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """
+ Load the Penn Treebank dataset.
+
+ This is the Penn Treebank Project: Release 2 CDROM, featuring a million words of 1989 Wall
+ Street Journal material.
+ """
+
+
+ import datasets
+
+
+ _CITATION = """\
+ @article{marcus-etal-1993-building,
+     title = "Building a Large Annotated Corpus of {E}nglish: The {P}enn {T}reebank",
+     author = "Marcus, Mitchell P. and
+       Santorini, Beatrice and
+       Marcinkiewicz, Mary Ann",
+     journal = "Computational Linguistics",
+     volume = "19",
+     number = "2",
+     year = "1993",
+     url = "https://www.aclweb.org/anthology/J93-2004",
+     pages = "313--330",
+ }
+ """
+
+ _DESCRIPTION = """\
+ This is the Penn Treebank Project: Release 2 CDROM, featuring a million words of 1989 Wall Street Journal material. This corpus has been annotated for part-of-speech (POS) information. In addition, over half of it has been annotated for skeletal syntactic structure.
+ """
+
+ _HOMEPAGE = "https://catalog.ldc.upenn.edu/LDC99T42"
+
+ _LICENSE = "LDC User Agreement for Non-Members"
+
+ # The Hugging Face datasets library doesn't host the data files; it only points to the original files.
+ _URL = "https://raw.githubusercontent.com/wojzaremba/lstm/master/data/"
+ _TRAINING_FILE = "ptb.train.txt"
+ _DEV_FILE = "ptb.valid.txt"
+ _TEST_FILE = "ptb.test.txt"
+
+
+ class PtbTextOnlyConfig(datasets.BuilderConfig):
+     """BuilderConfig for PtbTextOnly."""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig for PtbTextOnly.
+
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super().__init__(**kwargs)
+
+
+ class PtbTextOnly(datasets.GeneratorBasedBuilder):
+     """Load the Penn Treebank dataset."""
+
+     VERSION = datasets.Version("1.1.0")
+
+     BUILDER_CONFIGS = [
+         PtbTextOnlyConfig(
+             name="penn_treebank",
+             version=VERSION,
+             description="Load the Penn Treebank dataset",
+         ),
+     ]
+
+     def _info(self):
+         features = datasets.Features({"sentence": datasets.Value("string")})
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             # Language modeling has no fixed (input, target) pair, so no
+             # supervised keys are defined for as_supervised=True.
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         # dl_manager downloads the remote files and returns the same dict
+         # structure with URLs replaced by paths to local cached copies.
+         my_urls = {
+             "train": f"{_URL}{_TRAINING_FILE}",
+             "dev": f"{_URL}{_DEV_FILE}",
+             "test": f"{_URL}{_TEST_FILE}",
+         }
+         data_dir = dl_manager.download_and_extract(my_urls)
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": data_dir["train"]}),
+             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": data_dir["test"]}),
+             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": data_dir["dev"]}),
+         ]
+
+     def _generate_examples(self, filepath):
+         """Yields (key, example) tuples, one per line of the text file."""
+         with open(filepath, encoding="utf-8") as f:
+             for id_, line in enumerate(f):
+                 yield id_, {"sentence": line.strip()}
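The `_generate_examples` method in the script above is plain line iteration. The sketch below replicates its logic standalone on a throwaway file, using illustrative (not actual) corpus lines, to show the (key, example) pairs the builder would yield:

```python
import os
import tempfile


# Mirror of the script's _generate_examples: one (key, example) pair per line.
def generate_examples(filepath):
    with open(filepath, encoding="utf-8") as f:
        for id_, line in enumerate(f):
            yield id_, {"sentence": line.strip()}


# Throwaway stand-in for a downloaded split file (lines are invented).
with tempfile.NamedTemporaryFile(
    "w", suffix=".txt", delete=False, encoding="utf-8"
) as tmp:
    tmp.write("the <unk> announced a buyout\nshares fell N %\n")
    path = tmp.name

examples = list(generate_examples(path))
os.unlink(path)
```

Each example is keyed by its zero-based line index, matching the `id_` the real builder emits.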