parquet-converter committed on
Commit bb33d21 · 1 Parent(s): 9617780

Update parquet files
.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text

README.md DELETED
@@ -1,279 +0,0 @@
- A dataset for benchmarking keyphrase extraction and generation techniques on abstracts of English scientific papers. For more details about the dataset, please refer to the original paper - [https://dl.acm.org/doi/pdf/10.3115/1119355.1119383](https://dl.acm.org/doi/pdf/10.3115/1119355.1119383).
-
- Data source - [https://github.com/boudinfl/ake-datasets/tree/master/datasets/Inspec](https://github.com/boudinfl/ake-datasets/tree/master/datasets/Inspec)
-
- ## Dataset Summary
- The Inspec dataset was originally proposed by *Hulth* in the 2003 paper [Improved automatic keyword extraction given more linguistic knowledge](https://aclanthology.org/W03-1028.pdf). The dataset consists of abstracts of 2,000 English scientific papers from the [Inspec database](https://clarivate.com/webofsciencegroup/solutions/webofscience-inspec/). The abstracts come from papers in the scientific domains of *Computers and Control* and *Information Technology* published between 1998 and 2002. Each abstract has two sets of keyphrases annotated by professional indexers - *controlled* and *uncontrolled*. The *controlled* keyphrases are taken from the Inspec thesaurus and are therefore often not present in the abstract's text; only 18.1% of them actually appear in the abstract. The *uncontrolled* keyphrases are those selected by the indexers after reading the full-length articles, and 76.2% of them are present in the abstract's text. The original paper does not explain how these 2,000 papers were selected: it is unknown whether they were randomly sampled from all papers published between 1998 and 2002 in these domains, or whether they were the only papers in this domain indexed by Inspec. The train, dev and test splits of the data were arbitrarily chosen.
-
- One key aspect that makes this dataset unique is that its keyphrases were assigned by professional indexers, which is uncommon in the keyphrase literature; most datasets in this domain use author-assigned keyphrases as the ground truth. The version shared here does not explicitly distinguish the *controlled* and *uncontrolled* keyphrases; instead, it categorizes the keyphrases into *extractive* and *abstractive*. **Extractive keyphrases** are those that can be found in the input text, and **abstractive keyphrases** are those that are not present in the input text. For all the metadata about the documents and keyphrases, please refer to the [original source](https://github.com/boudinfl/ake-datasets/tree/master/datasets/Inspec) from which this dataset was taken. The main motivation behind presenting the dataset in this form is to make it easy for researchers to programmatically download it and evaluate their models on keyphrase extraction and generation. As treating keyphrase extraction as a sequence tagging task with contextual language models has become popular - [Keyphrase extraction from scholarly articles as sequence labeling using contextualized embeddings](https://arxiv.org/pdf/1910.08840.pdf) - we have also made the token tags available in the BIO tagging format.
-
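The extractive/abstractive split described above can be checked mechanically: a keyphrase counts as present when it occurs in the whitespace-joined document text. A minimal sketch of that check (the helper name and example tokens are illustrative, not part of the dataset):

```python
# Hedged sketch: decide whether a keyphrase is "extractive" (present in
# the text) by lowercased substring match on the whitespace-joined tokens.
def is_present(keyphrase, tokens):
    text = " ".join(tokens).lower()
    return keyphrase.lower() in text

tokens = ["Improved", "automatic", "keyword", "extraction"]
print(is_present("keyword extraction", tokens))   # True
print(is_present("linguistic knowledge", tokens))  # False
```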
- ## Dataset Structure
-
- ## Dataset Statistics
- Table 1: Statistics on the length of the abstractive keyphrases for the Train, Test, and Validation splits of the Inspec dataset.
-
- |             | Train | Test  | Validation |
- |:-----------:|:-----:|:-----:|:----------:|
- | Single word | 9.0%  | 9.5%  | 10.1%      |
- | Two words   | 50.4% | 48.2% | 45.7%      |
- | Three words | 27.6% | 28.6% | 29.8%      |
- | Four words  | 9.3%  | 10.3% | 10.3%      |
- | Five words  | 2.4%  | 2.0%  | 3.2%       |
- | Six words   | 0.9%  | 1.2%  | 0.7%       |
- | Seven words | 0.3%  | 0.2%  | 0.2%       |
- | Eight words | 0.1%  | 0%    | 0.1%       |
- | Nine words  | 0%    | 0.1%  | 0%         |
-
- Table 2: Statistics on the length of the extractive keyphrases for the Train, Test, and Validation splits of the Inspec dataset.
-
- |             | Train | Test   | Validation |
- |:-----------:|:-----:|:------:|:----------:|
- | Single word | 16.2% | 15.4%  | 17.0%      |
- | Two words   | 52.4% | 54.8%  | 51.6%      |
- | Three words | 24.3% | 22.99% | 24.3%      |
- | Four words  | 5.6%  | 4.96%  | 5.8%       |
- | Five words  | 1.2%  | 1.3%   | 1.1%       |
- | Six words   | 0.2%  | 0.36%  | 0.2%       |
- | Seven words | 0.1%  | 0.06%  | 0.1%       |
- | Eight words | 0%    | 0%     | 0.03%      |
-
- Table 3: General statistics of the Inspec dataset.
-
- | Type of Analysis | Train | Test | Validation |
- |:---:|:---:|:---:|:---:|
- | Annotator Type | Professional Indexers | Professional Indexers | Professional Indexers |
- | Document Type | Abstracts from Inspec Database | Abstracts from Inspec Database | Abstracts from Inspec Database |
- | No. of Documents | 1000 | 500 | 500 |
- | Avg. Document length (words) | 141.5 | 134.6 | 132.6 |
- | Max Document length (words) | 557 | 384 | 330 |
- | Max no. of abstractive keyphrases in a document | 17 | 20 | 14 |
- | Min no. of abstractive keyphrases in a document | 0 | 0 | 0 |
- | Avg. no. of abstractive keyphrases per document | 3.39 | 3.26 | 3.12 |
- | Max no. of extractive keyphrases in a document | 24 | 27 | 22 |
- | Min no. of extractive keyphrases in a document | 0 | 0 | 0 |
- | Avg. no. of extractive keyphrases per document | 6.39 | 6.56 | 5.95 |
-
-
- - Percentage of keyphrases that are named entities: 55.25% (named entities detected using scispacy - en-core-sci-lg model)
-
- - Percentage of keyphrases that are noun phrases: 73.59% (noun phrases detected using spacy after removing determiners)
-
-
- ### Data Fields
-
- **id**: unique identifier of the document.
- **document**: whitespace-separated list of words in the document.
- **doc_bio_tags**: BIO tags for each word in the document. B marks the beginning of a keyphrase, I marks a word inside a keyphrase, and O marks a word that is not part of any keyphrase.
- **extractive_keyphrases**: list of all the present keyphrases.
- **abstractive_keyphrases**: list of all the absent keyphrases.
-
-
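The doc_bio_tags field pairs one tag with each token in document, so present keyphrases can be recovered from the tags alone. A minimal sketch of that decoding (the helper name and example are ours, not a dataset API):

```python
# Hedged sketch: recover present-keyphrase strings from a tokenized
# document and its parallel BIO tag sequence.
def bio_to_phrases(tokens, tags):
    phrases, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":                 # a new keyphrase starts here
            if current:
                phrases.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:   # continuation of the open keyphrase
            current.append(token)
        else:                          # "O" (or a stray "I") closes any open phrase
            if current:
                phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return phrases

tokens = ["philosophy", "of", "mind", "is", "fun"]
tags = ["B", "I", "I", "O", "O"]
print(bio_to_phrases(tokens, tags))  # ['philosophy of mind']
```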
- ### Data Splits
-
- | Split | No. of datapoints |
- |--|--|
- | Train | 1,000 |
- | Test | 500 |
- | Validation | 500 |
-
- ## Usage
-
- ### Full Dataset
-
- ```python
- from datasets import load_dataset
-
- # get entire dataset
- dataset = load_dataset("midas/inspec", "raw")
-
- # sample from the train split
- print("Sample from training data split")
- train_sample = dataset["train"][0]
- print("Fields in the sample: ", [key for key in train_sample.keys()])
- print("Tokenized Document: ", train_sample["document"])
- print("Document BIO Tags: ", train_sample["doc_bio_tags"])
- print("Extractive/present Keyphrases: ", train_sample["extractive_keyphrases"])
- print("Abstractive/absent Keyphrases: ", train_sample["abstractive_keyphrases"])
- print("\n-----------\n")
-
- # sample from the validation split
- print("Sample from validation data split")
- validation_sample = dataset["validation"][0]
- print("Fields in the sample: ", [key for key in validation_sample.keys()])
- print("Tokenized Document: ", validation_sample["document"])
- print("Document BIO Tags: ", validation_sample["doc_bio_tags"])
- print("Extractive/present Keyphrases: ", validation_sample["extractive_keyphrases"])
- print("Abstractive/absent Keyphrases: ", validation_sample["abstractive_keyphrases"])
- print("\n-----------\n")
-
- # sample from the test split
- print("Sample from test data split")
- test_sample = dataset["test"][0]
- print("Fields in the sample: ", [key for key in test_sample.keys()])
- print("Tokenized Document: ", test_sample["document"])
- print("Document BIO Tags: ", test_sample["doc_bio_tags"])
- print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
- print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
- print("\n-----------\n")
- ```
- **Output**
-
- ```bash
- Sample from training data split
- Fields in the sample: ['id', 'document', 'doc_bio_tags', 'extractive_keyphrases', 'abstractive_keyphrases', 'other_metadata']
- Tokenized Document: ['A', 'conflict', 'between', 'language', 'and', 'atomistic', 'information', 'Fred', 'Dretske', 'and', 'Jerry', 'Fodor', 'are', 'responsible', 'for', 'popularizing', 'three', 'well-known', 'theses', 'in', 'contemporary', 'philosophy', 'of', 'mind', ':', 'the', 'thesis', 'of', 'Information-Based', 'Semantics', '-LRB-', 'IBS', '-RRB-', ',', 'the', 'thesis', 'of', 'Content', 'Atomism', '-LRB-', 'Atomism', '-RRB-', 'and', 'the', 'thesis', 'of', 'the', 'Language', 'of', 'Thought', '-LRB-', 'LOT', '-RRB-', '.', 'LOT', 'concerns', 'the', 'semantically', 'relevant', 'structure', 'of', 'representations', 'involved', 'in', 'cognitive', 'states', 'such', 'as', 'beliefs', 'and', 'desires', '.', 'It', 'maintains', 'that', 'all', 'such', 'representations', 'must', 'have', 'syntactic', 'structures', 'mirroring', 'the', 'structure', 'of', 'their', 'contents', '.', 'IBS', 'is', 'a', 'thesis', 'about', 'the', 'nature', 'of', 'the', 'relations', 'that', 'connect', 'cognitive', 'representations', 'and', 'their', 'parts', 'to', 'their', 'contents', '-LRB-', 'semantic', 'relations', '-RRB-', '.', 'It', 'holds', 'that', 'these', 'relations', 'supervene', 'solely', 'on', 'relations', 'of', 'the', 'kind', 'that', 'support', 'information', 'content', ',', 'perhaps', 'with', 'some', 'help', 'from', 'logical', 'principles', 'of', 'combination', '.', 'Atomism', 'is', 'a', 'thesis', 'about', 'the', 'nature', 'of', 'the', 'content', 'of', 'simple', 'symbols', '.', 'It', 'holds', 'that', 'each', 'substantive', 'simple', 'symbol', 'possesses', 'its', 'content', 'independently', 'of', 'all', 'other', 'symbols', 'in', 'the', 'representational', 'system', '.', 'I', 'argue', 'that', 'Dretske', "'s", 'and', 'Fodor', "'s", 'theories', 'are', 'false', 'and', 'that', 'their', 'falsehood', 'results', 'from', 'a', 'conflict', 'IBS', 'and', 'Atomism', ',', 'on', 'the', 'one', 'hand', ',', 'and', 'LOT', ',', 'on', 'the', 'other']
- Document BIO Tags: ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'I', 'O', 'B', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'B', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O']
- Extractive/present Keyphrases: ['philosophy of mind', 'content atomism', 'ibs', 'language of thought', 'lot', 'cognitive states', 'beliefs', 'desires']
- Abstractive/absent Keyphrases: ['information-based semantics']
-
- -----------
-
- Sample from validation data split
- Fields in the sample: ['id', 'document', 'doc_bio_tags', 'extractive_keyphrases', 'abstractive_keyphrases', 'other_metadata']
- Tokenized Document: ['Impact', 'of', 'aviation', 'highway-in-the-sky', 'displays', 'on', 'pilot', 'situation', 'awareness', 'Thirty-six', 'pilots', '-LRB-', '31', 'men', ',', '5', 'women', '-RRB-', 'were', 'tested', 'in', 'a', 'flight', 'simulator', 'on', 'their', 'ability', 'to', 'intercept', 'a', 'pathway', 'depicted', 'on', 'a', 'highway-in-the-sky', '-LRB-', 'HITS', '-RRB-', 'display', '.', 'While', 'intercepting', 'and', 'flying', 'the', 'pathway', ',', 'pilots', 'were', 'required', 'to', 'watch', 'for', 'traffic', 'outside', 'the', 'cockpit', '.', 'Additionally', ',', 'pilots', 'were', 'tested', 'on', 'their', 'awareness', 'of', 'speed', ',', 'altitude', ',', 'and', 'heading', 'during', 'the', 'flight', '.', 'Results', 'indicated', 'that', 'the', 'presence', 'of', 'a', 'flight', 'guidance', 'cue', 'significantly', 'improved', 'flight', 'path', 'awareness', 'while', 'intercepting', 'the', 'pathway', ',', 'but', 'significant', 'practice', 'effects', 'suggest', 'that', 'a', 'guidance', 'cue', 'might', 'be', 'unnecessary', 'if', 'pilots', 'are', 'given', 'proper', 'training', '.', 'The', 'amount', 'of', 'time', 'spent', 'looking', 'outside', 'the', 'cockpit', 'while', 'using', 'the', 'HITS', 'display', 'was', 'significantly', 'less', 'than', 'when', 'using', 'conventional', 'aircraft', 'instruments', '.', 'Additionally', ',', 'awareness', 'of', 'flight', 'information', 'present', 'on', 'the', 'HITS', 'display', 'was', 'poor', '.', 'Actual', 'or', 'potential', 'applications', 'of', 'this', 'research', 'include', 'guidance', 'for', 'the', 'development', 'of', 'perspective', 'flight', 'display', 'standards', 'and', 'as', 'a', 'basis', 'for', 'flight', 'training', 'requirements']
- Document BIO Tags: ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'B', 'I', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']
- Extractive/present Keyphrases: ['flight simulator', 'pilots', 'cockpit', 'flight guidance', 'situation awareness', 'flight path awareness']
- Abstractive/absent Keyphrases: ['highway-in-the-sky display', 'human factors', 'aircraft display']
-
- -----------
-
- Sample from test data split
- Fields in the sample: ['id', 'document', 'doc_bio_tags', 'extractive_keyphrases', 'abstractive_keyphrases', 'other_metadata']
- Tokenized Document: ['A', 'new', 'graphical', 'user', 'interface', 'for', 'fast', 'construction', 'of', 'computation', 'phantoms', 'and', 'MCNP', 'calculations', ':', 'application', 'to', 'calibration', 'of', 'in', 'vivo', 'measurement', 'systems', 'Reports', 'on', 'a', 'new', 'utility', 'for', 'development', 'of', 'computational', 'phantoms', 'for', 'Monte', 'Carlo', 'calculations', 'and', 'data', 'analysis', 'for', 'in', 'vivo', 'measurements', 'of', 'radionuclides', 'deposited', 'in', 'tissues', '.', 'The', 'individual', 'properties', 'of', 'each', 'worker', 'can', 'be', 'acquired', 'for', 'a', 'rather', 'precise', 'geometric', 'representation', 'of', 'his', '-LRB-', 'her', '-RRB-', 'anatomy', ',', 'which', 'is', 'particularly', 'important', 'for', 'low', 'energy', 'gamma', 'ray', 'emitting', 'sources', 'such', 'as', 'thorium', ',', 'uranium', ',', 'plutonium', 'and', 'other', 'actinides', '.', 'The', 'software', 'enables', 'automatic', 'creation', 'of', 'an', 'MCNP', 'input', 'data', 'file', 'based', 'on', 'scanning', 'data', '.', 'The', 'utility', 'includes', 'segmentation', 'of', 'images', 'obtained', 'with', 'either', 'computed', 'tomography', 'or', 'magnetic', 'resonance', 'imaging', 'by', 'distinguishing', 'tissues', 'according', 'to', 'their', 'signal', '-LRB-', 'brightness', '-RRB-', 'and', 'specification', 'of', 'the', 'source', 'and', 'detector', '.', 'In', 'addition', ',', 'a', 'coupling', 'of', 'individual', 'voxels', 'within', 'the', 'tissue', 'is', 'used', 'to', 'reduce', 'the', 'memory', 'demand', 'and', 'to', 'increase', 'the', 'calculational', 'speed', '.', 'The', 'utility', 'was', 'tested', 'for', 'low', 'energy', 'emitters', 'in', 'plastic', 'and', 'biological', 'tissues', 'as', 'well', 'as', 'for', 'computed', 'tomography', 'and', 'magnetic', 'resonance', 'imaging', 'scanning', 'information']
- Document BIO Tags: ['O', 'O', 'B', 'I', 'I', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'B', 'I', 'I', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'B', 'I', 'I', 'O', 'O', 'O', 'O', 'B', 'I', 'I', 'O', 'B', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'I', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'I', 'I', 'I', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'B', 'O', 'B', 'I', 'O', 'O', 'B', 'I', 'I', 'I', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'B', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B', 'O', 'O', 'B', 'O', 'O', 'O', 'O', 'B', 'I', 'O', 'B', 'I', 'I', 'I', 'I']
- Extractive/present Keyphrases: ['computational phantoms', 'monte carlo calculations', 'in vivo measurements', 'radionuclides', 'tissues', 'worker', 'precise geometric representation', 'mcnp input data file', 'scanning data', 'computed tomography', 'brightness', 'graphical user interface', 'computation phantoms', 'calibration', 'in vivo measurement systems', 'signal', 'detector', 'individual voxels', 'memory demand', 'calculational speed', 'plastic', 'magnetic resonance imaging scanning information', 'anatomy', 'low energy gamma ray emitting sources', 'actinides', 'software', 'automatic creation']
- Abstractive/absent Keyphrases: ['th', 'u', 'pu', 'biological tissues']
-
- -----------
-
- ```
-
- ### Keyphrase Extraction
- ```python
- from datasets import load_dataset
-
- # get the dataset only for keyphrase extraction
- dataset = load_dataset("midas/inspec", "extraction")
-
- print("Samples for Keyphrase Extraction")
-
- # sample from the train split
- print("Sample from training data split")
- train_sample = dataset["train"][0]
- print("Fields in the sample: ", [key for key in train_sample.keys()])
- print("Tokenized Document: ", train_sample["document"])
- print("Document BIO Tags: ", train_sample["doc_bio_tags"])
- print("\n-----------\n")
-
- # sample from the validation split
- print("Sample from validation data split")
- validation_sample = dataset["validation"][0]
- print("Fields in the sample: ", [key for key in validation_sample.keys()])
- print("Tokenized Document: ", validation_sample["document"])
- print("Document BIO Tags: ", validation_sample["doc_bio_tags"])
- print("\n-----------\n")
-
- # sample from the test split
- print("Sample from test data split")
- test_sample = dataset["test"][0]
- print("Fields in the sample: ", [key for key in test_sample.keys()])
- print("Tokenized Document: ", test_sample["document"])
- print("Document BIO Tags: ", test_sample["doc_bio_tags"])
- print("\n-----------\n")
- ```
-
- ### Keyphrase Generation
- ```python
- from datasets import load_dataset
-
- # get the dataset only for keyphrase generation
- dataset = load_dataset("midas/inspec", "generation")
-
- print("Samples for Keyphrase Generation")
-
- # sample from the train split
- print("Sample from training data split")
- train_sample = dataset["train"][0]
- print("Fields in the sample: ", [key for key in train_sample.keys()])
- print("Tokenized Document: ", train_sample["document"])
- print("Extractive/present Keyphrases: ", train_sample["extractive_keyphrases"])
- print("Abstractive/absent Keyphrases: ", train_sample["abstractive_keyphrases"])
- print("\n-----------\n")
-
- # sample from the validation split
- print("Sample from validation data split")
- validation_sample = dataset["validation"][0]
- print("Fields in the sample: ", [key for key in validation_sample.keys()])
- print("Tokenized Document: ", validation_sample["document"])
- print("Extractive/present Keyphrases: ", validation_sample["extractive_keyphrases"])
- print("Abstractive/absent Keyphrases: ", validation_sample["abstractive_keyphrases"])
- print("\n-----------\n")
-
- # sample from the test split
- print("Sample from test data split")
- test_sample = dataset["test"][0]
- print("Fields in the sample: ", [key for key in test_sample.keys()])
- print("Tokenized Document: ", test_sample["document"])
- print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
- print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
- print("\n-----------\n")
- ```
-
-
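Predictions for either task are usually scored against the gold keyphrase lists by exact match. A minimal sketch of exact-match F1 after lowercasing (the function name and the example lists are ours, not part of the dataset):

```python
# Hedged sketch: exact-match precision/recall/F1 between a predicted
# keyphrase list and a gold list, after lowercasing and deduplication.
def keyphrase_f1(predicted, gold):
    pred = {p.lower() for p in predicted}
    ref = {g.lower() for g in gold}
    if not pred or not ref:
        return 0.0
    tp = len(pred & ref)              # exact string matches
    precision = tp / len(pred)
    recall = tp / len(ref)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = ["philosophy of mind", "content atomism", "ibs"]
predicted = ["Philosophy of Mind", "atomism", "ibs"]
print(round(keyphrase_f1(predicted, gold), 3))  # 0.667
```

Exact match is strict; published evaluations often stem keyphrases before matching, which this sketch omits.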
- ## Citation Information
- Please cite the works below if you use this dataset in your work.
-
- ```
- @inproceedings{hulth2003improved,
- title={Improved automatic keyword extraction given more linguistic knowledge},
- author={Hulth, Anette},
- booktitle={Proceedings of the 2003 conference on Empirical methods in natural language processing},
- pages={216--223},
- year={2003}
- }
- ```
- and
-
- ```
- @InProceedings{10.1007/978-3-030-45442-5_41,
- author="Sahrawat, Dhruva
- and Mahata, Debanjan
- and Zhang, Haimin
- and Kulkarni, Mayank
- and Sharma, Agniv
- and Gosangi, Rakesh
- and Stent, Amanda
- and Kumar, Yaman
- and Shah, Rajiv Ratn
- and Zimmermann, Roger",
- editor="Jose, Joemon M.
- and Yilmaz, Emine
- and Magalh{\~a}es, Jo{\~a}o
- and Castells, Pablo
- and Ferro, Nicola
- and Silva, M{\'a}rio J.
- and Martins, Fl{\'a}vio",
- title="Keyphrase Extraction as Sequence Labeling Using Contextualized Embeddings",
- booktitle="Advances in Information Retrieval",
- year="2020",
- publisher="Springer International Publishing",
- address="Cham",
- pages="328--335",
- abstract="In this paper, we formulate keyphrase extraction from scholarly articles as a sequence labeling task solved using a BiLSTM-CRF, where the words in the input text are represented using deep contextualized embeddings. We evaluate the proposed architecture using both contextualized and fixed word embedding models on three different benchmark datasets, and compare with existing popular unsupervised and supervised techniques. Our results quantify the benefits of: (a) using contextualized embeddings over fixed word embeddings; (b) using a BiLSTM-CRF architecture with contextualized word embeddings over fine-tuning the contextualized embedding model directly; and (c) using domain-specific contextualized embeddings (SciBERT). Through error analysis, we also provide some insights into why particular models work better than the others. Lastly, we present a case study where we analyze different self-attention layers of the two best models (BERT and SciBERT) to better understand their predictions.",
- isbn="978-3-030-45442-5"
- }
- ```
-
- and
-
- ```
- @article{kulkarni2021learning,
- title={Learning Rich Representation of Keyphrases from Text},
- author={Kulkarni, Mayank and Mahata, Debanjan and Arora, Ravneet and Bhowmik, Rajarshi},
- journal={arXiv preprint arXiv:2112.08547},
- year={2021}
- }
- ```
-
- ## Contributions
- Thanks to [@debanjanbhucs](https://github.com/debanjanbhucs), [@dibyaaaaax](https://github.com/dibyaaaaax), [@UmaGunturi](https://github.com/UmaGunturi) and [@ad6398](https://github.com/ad6398) for adding this dataset.
extraction/inspec-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f2709eeb92f809cfe227fa2bfdea96dfaa6acbdabdaf00ff4c63bcc236cfe7aa
+ size 205992
extraction/inspec-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ffe88b61ea349dd1b2731adcd3817877bcfac4ec2ed680b74d103b31f17f6a0d
+ size 390386
extraction/inspec-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:502da0cc8adc43a0805ffa06b369f43699bad57485ce727bce5d0809d2c266d2
+ size 200698
generation/inspec-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:04fe1577401e962127680ae43b4f3a0badb758862e453f6a1182dad157e4192c
+ size 270009
generation/inspec-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:096b6ab37c211e31ca8548eaaf22cacc659c21aebcd74994429f890014226c0e
+ size 514745
generation/inspec-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f0ee915662e13dbbbb925384e93a52536788e3b382fcb5b670674994b867bfdd
+ size 259681
inspec.py DELETED
@@ -1,155 +0,0 @@
- import json
-
- import datasets
-
- _CITATION = """\
- @inproceedings{hulth2003improved,
- title={Improved automatic keyword extraction given more linguistic knowledge},
- author={Hulth, Anette},
- booktitle={Proceedings of the 2003 conference on Empirical methods in natural language processing},
- pages={216--223},
- year={2003}
- }
- """
-
- _DESCRIPTION = """\
- Benchmark dataset for automatic identification of keyphrases from text, published with the work - Improved automatic keyword extraction given more linguistic knowledge. Anette Hulth. In Proceedings of EMNLP 2003. p. 216-223.
- """
-
- _HOMEPAGE = "https://aclanthology.org/W03-1028.pdf"
-
- # The license information was obtained from https://github.com/boudinfl/ake-datasets, the source from which this dataset was taken
- _LICENSE = "Apache 2.0 License"
-
- _URLS = {
-     "test": "test.jsonl",
-     "train": "train.jsonl",
-     "valid": "valid.jsonl"
- }
-
-
- class Inspec(datasets.GeneratorBasedBuilder):
-     """Inspec benchmark dataset for keyphrase extraction and generation."""
-
-     VERSION = datasets.Version("0.0.1")
-
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(name="extraction", version=VERSION,
-                                description="This configuration covers keyphrase extraction"),
-         datasets.BuilderConfig(name="generation", version=VERSION,
-                                description="This configuration covers keyphrase generation"),
-         datasets.BuilderConfig(name="raw", version=VERSION,
-                                description="This configuration covers the raw data"),
-     ]
-
-     DEFAULT_CONFIG_NAME = "extraction"
-
-     def _info(self):
-         if self.config.name == "extraction":  # the configuration selected in BUILDER_CONFIGS above
-             features = datasets.Features(
-                 {
-                     "id": datasets.Value("int64"),
-                     "document": datasets.features.Sequence(datasets.Value("string")),
-                     "doc_bio_tags": datasets.features.Sequence(datasets.Value("string"))
-                 }
-             )
-         elif self.config.name == "generation":
-             features = datasets.Features(
-                 {
-                     "id": datasets.Value("int64"),
-                     "document": datasets.features.Sequence(datasets.Value("string")),
-                     "extractive_keyphrases": datasets.features.Sequence(datasets.Value("string")),
-                     "abstractive_keyphrases": datasets.features.Sequence(datasets.Value("string"))
-                 }
-             )
-         else:
-             features = datasets.Features(
-                 {
-                     "id": datasets.Value("int64"),
-                     "document": datasets.features.Sequence(datasets.Value("string")),
-                     "doc_bio_tags": datasets.features.Sequence(datasets.Value("string")),
-                     "extractive_keyphrases": datasets.features.Sequence(datasets.Value("string")),
-                     "abstractive_keyphrases": datasets.features.Sequence(datasets.Value("string")),
-                     "other_metadata": datasets.features.Sequence(
-                         {
-                             "text": datasets.features.Sequence(datasets.Value("string")),
-                             "bio_tags": datasets.features.Sequence(datasets.Value("string"))
-                         }
-                     )
-                 }
-             )
-         return datasets.DatasetInfo(
-             # This is the description that will appear on the datasets page.
-             description=_DESCRIPTION,
-             # This defines the different columns of the dataset and their types
-             features=features,
-             homepage=_HOMEPAGE,
-             # License for the dataset if available
-             license=_LICENSE,
-             # Citation for the dataset
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         data_dir = dl_manager.download_and_extract(_URLS)
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": data_dir['train'],
-                     "split": "train",
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 gen_kwargs={
-                     "filepath": data_dir['test'],
-                     "split": "test",
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 gen_kwargs={
-                     "filepath": data_dir['valid'],
-                     "split": "valid",
-                 },
-             ),
-         ]
-
-     # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
-     def _generate_examples(self, filepath, split):
-         with open(filepath, encoding="utf-8") as f:
-             for key, row in enumerate(f):
-                 data = json.loads(row)
-                 if self.config.name == "extraction":
-                     # Yields examples as (key, example) tuples
-                     yield key, {
-                         "id": data['paper_id'],
-                         "document": data["document"],
-                         "doc_bio_tags": data.get("doc_bio_tags")
-                     }
-                 elif self.config.name == "generation":
-                     yield key, {
-                         "id": data['paper_id'],
-                         "document": data["document"],
-                         "extractive_keyphrases": data.get("extractive_keyphrases"),
-                         "abstractive_keyphrases": data.get("abstractive_keyphrases")
-                     }
-                 else:
-                     yield key, {
-                         "id": data['paper_id'],
-                         "document": data["document"],
-                         "doc_bio_tags": data.get("doc_bio_tags"),
-                         "extractive_keyphrases": data.get("extractive_keyphrases"),
-                         "abstractive_keyphrases": data.get("abstractive_keyphrases"),
-                         "other_metadata": data["other_metadata"]
-                     }
inspec_performance.png DELETED
Binary file (17 kB)
 
raw/inspec-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ab2d21c13463892c549bbbacfbe3513d9406b983c89d2c9b64ba93f272f574f9
+ size 282978
raw/inspec-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ee3b888460b65cbbf22bd2285485a7dde504f56baef5e36e588b4e422329420a
+ size 537923
raw/inspec-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:74745a214060dd378b45def4c5e34792ecad113a20a84a1b11e8066ae3f83e8b
+ size 272062
test.jsonl DELETED
The diff for this file is too large to render.

train.jsonl DELETED
The diff for this file is too large to render.

valid.jsonl DELETED
The diff for this file is too large to render.