Yuan Chuan Kee committed
Commit c3c8e60 · 1 Parent(s): 898e7f2

Initial commit with data
.gitattributes CHANGED
@@ -25,3 +25,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zstandard filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+data/jstor.jsonl.gz filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,146 @@
---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---

# Dataset Card for annotated_reference_strings

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://www.github.com/kylase](https://www.github.com/kylase)
- **Repository:** [https://www.github.com/kylase](https://www.github.com/kylase)
- **Point of Contact:** [Yuan Chuan Kee](https://www.github.com/kylase)

### Dataset Summary

The `annotated_reference_strings` dataset comprises millions of annotated reference strings, i.e. each token of a string has an associated label such as author, title, or year.

These strings are synthesized by running a citation processor over millions of citations obtained from various sources, spanning different scientific domains.

### Supported Tasks

This dataset can be used for structure prediction tasks such as sequence labelling.

### Languages

The dataset is composed of reference strings that are in English.

## Dataset Structure

### Data Instances

```json
{
  "source": "pubmed",
  "lang": "en",
  "entry_type": "article",
  "doi_prefix": "pubmed19n0001",
  "csl_style": "annual-reviews",
  "content": "<citation-number>8.</citation-number> <author>Mohr W.</author> <year>1977.</year> <title>[Morphology of bone tumors. 2. Morphology of benign bone tumors].</title> <container-title>Aktuelle Probleme in Chirurgie und Orthopadie.</container-title> <volume>5:</volume> <page>29–42</page>"
}
```

**Important Note:** Each citation is rendered in _at most_ **17** CSL styles, so the dataset contains near-duplicate strings.

Every token is enclosed in a tag; only tokens that act as "conjunctions" between segments are left untagged.

Note that some tokens are annotated with a hierarchical tag, e.g. `accessed.year`, depending on how the author(s) of the CSL style defined it.

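The tagged `content` strings can be turned into (label, text) pairs with a small amount of regex work. A minimal sketch, assuming the tag format shown above; `parse_content` and `TAG_RE` are illustrative names, not part of the dataset tooling:

```python
import re

# Tag names use CSL variable names, which may contain dots and hyphens
# (e.g. citation-number, accessed.year); \1 matches the closing tag.
TAG_RE = re.compile(r"<([\w.-]+)>(.*?)</\1>", re.DOTALL)

def parse_content(content: str) -> list[tuple[str, str]]:
    """Return (label, text) pairs in order of appearance."""
    return TAG_RE.findall(content)

pairs = parse_content(
    "<citation-number>8.</citation-number> <author>Mohr W.</author> <year>1977.</year>"
)
# → [('citation-number', '8.'), ('author', 'Mohr W.'), ('year', '1977.')]
```

Untagged "conjunction" tokens between segments are simply skipped by this approach; a full parser would need to decide how to label them.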
### Data Fields

- `source`: The source of the citation: one of `{pubmed, jstor, crossref}`.
- `lang`: The language of the citation: `{en}`.
- `entry_type`: The BibTeX entry type: one of `{article, book, inbook, misc, techreport, phdthesis, incollection, inproceedings}`.
- `doi_prefix`: For JSTOR and CrossRef, the prefix of the DOI. For PubMed, the directory (e.g. `pubmed19nXXXX`, where `XXXX` is 4 digits) from which the citation is generated.
- `csl_style`: The CSL style in which the citation is rendered.
- `content`: The rendered citation in that style, with each segment enclosed in tags named after the CSL variables.

### Data Splits

Data splits are not available yet.

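Until official splits are published, a reproducible split can be derived on the fly, for example by hashing a stable key such as the `content` string. This is a sketch of one possible approach, not something the dataset itself provides:

```python
import hashlib

def assign_split(key: str, train: float = 0.8, validation: float = 0.1) -> str:
    """Deterministically map an example to a split by hashing a stable key."""
    bucket = int(hashlib.sha256(key.encode("utf-8")).hexdigest(), 16) % 100
    if bucket < train * 100:
        return "train"
    if bucket < (train + validation) * 100:
        return "validation"
    return "test"

# The same key always lands in the same split, across runs and machines.
split = assign_split("<author>Mohr W.</author> <year>1977.</year>")
```

Hashing rather than random sampling keeps the split stable even when near-duplicate renderings of the same citation are present.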
## Dataset Creation

### Source Data

#### Initial Data Collection and Normalization

The citations used to generate these reference strings were obtained from 3 main sources:

- [PubMed](https://www.nlm.nih.gov/databases/download/pubmed_medline.html) (2019 Baseline)
- CrossRef via [Open Academic Graph v2](https://www.microsoft.com/en-us/research/project/open-academic-graph/)
- JSTOR Sample Datasets (not available online as of publication date)

If a citation is not in BibTeX format, [bibutils](https://sourceforge.net/p/bibutils/home/Bibutils/) is used to convert it to BibTeX.

#### Who are the source language producers?

The manner in which citations are rendered as reference strings is based on rules/specifications dictated by the publisher.
[Citation Style Language](https://citationstyles.org/) (CSL) is an established standard in which such specifications are prescribed.
Thousands of citation styles are available.

### Annotations

#### Annotation process

The annotation process involves 2 main interventions:

1. Modification of the styles' CSL specifications to inject the CSL variable names as part of the rendering process
2. Sanitization of the rendered strings using regular expressions to ensure all tokens and characters are enclosed in tags

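The second intervention amounts to a coverage check: after removing every tagged span, little besides whitespace should remain. A minimal sketch of such a check, with an illustrative function name and simplified rules (the original pipeline deliberately leaves "conjunction" tokens untagged, which this strict version would flag):

```python
import re

# A tagged span is <name>...</name> where the closing tag matches the opening one.
TAGGED_SPAN = re.compile(r"<([\w.-]+)>.*?</\1>", re.DOTALL)

def fully_tagged(content: str) -> bool:
    """Return True if every non-whitespace character sits inside a tag."""
    leftover = TAGGED_SPAN.sub("", content)
    return leftover.strip() == ""

fully_tagged("<author>Mohr W.</author> <year>1977.</year>")    # True
fully_tagged("<author>Mohr W.</author> and <year>1977</year>") # False: "and" is untagged
```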
#### Who are the annotators?

The original CSL specifications are available on [GitHub](https://github.com/citation-style-language/styles).

The modification of the styles and the sanitization process were done by the author of this work.

## Additional Information

### Licensing Information

This dataset is licensed under [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).

### Citation Information

This dataset is a product of a Master's project done at the National University of Singapore.

If you use it, please cite the following:

```bibtex
@techreport{kee2021,
    author = {Yuan Chuan Kee},
    title = {Synthesis of a large dataset of annotated reference strings for developing citation parsers},
    institution = {National University of Singapore},
    year = {2021}
}
```

### Contributions

Thanks to [@kylase](https://github.com/kylase) for adding this dataset.
annotated_reference_strings.py ADDED
@@ -0,0 +1,123 @@
"""\
Annotated Reference Strings dataset synthesized using a CSL processor on citations obtained from CrossRef, JSTOR and
PubMed
"""

import gzip
import json

import datasets


_CITATION = """\
@techreport{kee2021,
    author = {Yuan Chuan Kee},
    title = {Synthesis of a large dataset of annotated reference strings for developing citation parsers},
    institution = {National University of Singapore},
    year = {2021}
}
"""

_DESCRIPTION = """\
Annotated reference strings synthesized using a CSL processor on citations obtained from CrossRef, JSTOR and PubMed.
Each token of a reference string carries a label (author, title, year, etc.) encoded as an enclosing tag.
"""

_HOMEPAGE = "https://huggingface.co/datasets/yuanchuan/annotated_reference_strings"

_LICENSE = "cc-by-4.0"

# The Hugging Face datasets library doesn't host the data; these URLs point to the original files.
_BASE_URL = "https://huggingface.co/datasets/yuanchuan/annotated_reference_strings"
_URLs = {
    "default": [f"{_BASE_URL}/resolve/main/data/jstor.jsonl.gz"]
}


class AnnotatedReferenceStringsDataset(datasets.GeneratorBasedBuilder):
    """Annotated Reference Strings dataset"""

    VERSION = datasets.Version("0.1.0")

    # A single configuration for now; load it with
    # datasets.load_dataset("yuanchuan/annotated_reference_strings", "default")
    BUILDER_CONFIGS = [
        datasets.BuilderConfig(name="default", version=VERSION,
                               description="This dataset is the raw representation without tokenization."),
    ]

    DEFAULT_CONFIG_NAME = "default"

    def _info(self):
        features = datasets.Features(
            {
                "source": datasets.Value("string"),
                "lang": datasets.Value("string"),
                "entry_type": datasets.Value("string"),
                "doi_prefix": datasets.Value("string"),
                "csl_style": datasets.Value("string"),
                "content": datasets.Value("string"),
            }
        )

        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            # There is no canonical (input, target) pair, so no supervised keys.
            supervised_keys=None,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        # dl_manager downloads the gzipped JSONL files and returns local paths.
        data_urls = _URLs[self.config.name]
        files = dl_manager.download(data_urls)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "filepaths": files,
                    "split": "train",
                },
            )
        ]

    def _generate_examples(self, filepaths, split):
        id_ = 0

        for filepath in filepaths:
            # gzip.open can take the path directly; "rt" decodes to text.
            with gzip.open(filepath, "rt", encoding="utf-8") as f:
                for line in f:
                    if line.strip():
                        example = json.loads(line)
                        yield id_, example
                        id_ += 1
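Outside the `datasets` loader, a shard such as `data/jstor.jsonl.gz` can be read with the standard library alone. A self-contained sketch that writes a tiny illustrative shard and reads it back the same way `_generate_examples` does (the sample record is made up for the example):

```python
import gzip
import json
import os
import tempfile

# Write a tiny illustrative shard: one JSON object per line, gzip-compressed.
record = {"source": "jstor", "lang": "en", "content": "<author>Mohr W.</author>"}
path = os.path.join(tempfile.mkdtemp(), "sample.jsonl.gz")
with gzip.open(path, "wt", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")

# Read it back, skipping blank lines, just like the loading script.
with gzip.open(path, "rt", encoding="utf-8") as f:
    examples = [json.loads(line) for line in f if line.strip()]
```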
data/jstor.jsonl.gz ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:61121479456b8c63a855be753f05af39bf1e83dd4e265f9fa35eaaad8401a6fb
size 180552601