dmahata committed
Commit d6b168f · 1 Parent(s): e7bb6a7

Update README.md

Files changed (1): README.md +2 -10
README.md CHANGED
@@ -15,15 +15,7 @@ Original source of the data - [https://github.com/ygorg/KPTimes](https://github.
 
 KPTimes is a large scale dataset comprising of 279,923 news articles from New York Times and JP Times. It is one of the datasets which has annotations of keyphrases curated by the editors who can be considered as experts. The authors developed this dataset in order to have a large dataset for training neural models for keyphrase generation in a domain other than the scientific domain, and to understand the differences between keyphrases annotated by experts and non-experts. The authors show that the editors tend to assign generic keyphrases that are not present in the actual news article's text, with 55% of them being abstractive keyphrases. The keyphrases in the news domain as presented in this work were also on an average shorter (1.4 words) than those in the scientific datasets (2.4 words).
 
-The dataset is randomly divided into train (92.8%), validation (3.6%) and test (3.6%) splits. In order to enable the models trained on this dataset to generalize well the authors did not want to have the entire data taken from a single source (NY Times), and therefore added 10K more articles from JPTimes dataset.
-
-
-large scale dataset in the domain of news comprising of 279,923 news articles - supports the training of neural models.
-includes expert annotations - editor curated keyphrases
-how annotations differ from those found in existing datasets
-author assigned keyphrases are not consistent
-heuristics was applied to identify the content - title, headline and body
-Source of articles - New York Times
+The dataset is randomly divided into train (92.8%), validation (3.6%) and test (3.6%) splits. In order to enable the models trained on this dataset to generalize well the authors did not want to have the entire data taken from a single source (NY Times), and therefore added 10K more articles from JPTimes dataset. The authors collected free to read article URLs from NY Times spanning from 2006 to 2017, and obtained their corresponding HTML pages from the Internet Archive. They cleaned the HTML tags and extracted the title, and the main content of the articles using heuristics. The gold keyphrases were obtained from the metadata fields - *news_keywords* and *keywords*.
 
 ## Dataset Structure
 
@@ -212,4 +204,4 @@ print("\n-----------\n")
 ```
 
 ## Contributions
-Thanks to [@debanjanbhucs](https://github.com/debanjanbhucs), [@dibyaaaaax](https://github.com/dibyaaaaax) and [@ad6398](https://github.com/ad6398) for adding this dataset
+Thanks to [@debanjanbhucs](https://github.com/debanjanbhucs), [@dibyaaaaax](https://github.com/dibyaaaaax), [@UmaGunturi](https://github.com/UmaGunturi) and [@ad6398](https://github.com/ad6398) for adding this dataset
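
The added paragraph describes recovering gold keyphrases from the *news_keywords* and *keywords* metadata fields of the archived HTML pages. A minimal sketch of that extraction step, using only the standard library; the parser below is an illustrative assumption, not the authors' actual pipeline:

```python
# Sketch: pull gold keyphrases from <meta name="news_keywords"|"keywords"> tags
# of a news article's HTML page. Field names come from the README; the parsing
# logic here is a hypothetical reimplementation for illustration.
from html.parser import HTMLParser


class KeyphraseMetaParser(HTMLParser):
    """Collects the content of <meta> tags whose name is a keyphrase field."""

    FIELDS = {"news_keywords", "keywords"}

    def __init__(self):
        super().__init__()
        self.keyphrases = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attr_map = dict(attrs)
        if attr_map.get("name", "").lower() in self.FIELDS and attr_map.get("content"):
            # Keyphrases are comma-separated inside the content attribute.
            self.keyphrases.extend(
                kp.strip() for kp in attr_map["content"].split(",") if kp.strip()
            )


def extract_gold_keyphrases(html_page: str) -> list:
    parser = KeyphraseMetaParser()
    parser.feed(html_page)
    # Deduplicate while preserving first-seen order.
    return list(dict.fromkeys(parser.keyphrases))


# Toy example page (not a real NY Times article).
page = """<html><head>
<meta name="news_keywords" content="climate change, policy, emissions">
<meta name="keywords" content="policy, carbon tax">
</head><body>...</body></html>"""
print(extract_gold_keyphrases(page))
# → ['climate change', 'policy', 'emissions', 'carbon tax']
```

Merging both metadata fields and deduplicating mirrors the README's description that gold keyphrases come from the two fields combined; how the authors handled overlaps between them is an assumption here.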