astromis committed
Commit 1385f09 · verified · 1 Parent(s): 4a0d59e

Update README.md

Files changed (1)
  1. README.md +93 -24
README.md CHANGED
@@ -1,24 +1,93 @@
----
-license: cc-by-sa-4.0
-dataset_info:
-  features:
-  - name: header
-    dtype: string
-  - name: abstract
-    dtype: string
-  - name: keys
-    dtype: string
-  - name: text
-    sequence: string
-  splits:
-  - name: train
-    num_bytes: 11239555
-    num_examples: 1160
-  download_size: 5133568
-  dataset_size: 11239555
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: data/train-*
----
+---
+license: cc-by-sa-4.0
+dataset_info:
+  features:
+  - name: header
+    dtype: string
+  - name: abstract
+    dtype: string
+  - name: keys
+    dtype: string
+  - name: text
+    sequence: string
+  splits:
+  - name: train
+    num_bytes: 11239555
+    num_examples: 1160
+  download_size: 5133568
+  dataset_size: 11239555
+configs:
+- config_name: default
+  data_files:
+  - split: train
+    path: data/train-*
+language:
+- ru
+tags:
+- nlp
+- segmentation
+size_categories:
+- 1K<n<10K
+---
+
+# Dataset Card for the Small Student Science Corpus
+
+A Russian scientific-text corpus with paragraph annotation and more.
+
+## Dataset Details
+
+### Dataset Description
+
+This small Russian corpus was built from scientific papers written by high-school students and first-year university students. Be aware that the texts are not cleaned of redundant punctuation and noise. Tasks this corpus might be suitable for are listed under Direct Use below.
+
+Some dataset statistics:
+
+* 925,165 tokens in total
+* 11.5 MB of text
+* 12,659 paragraphs, with a mean sentence count of 3.2 (std 2.5)
+* 1,160 papers, with a mean token count of 798 (std 532)
+
+- **Language(s) (NLP):** Russian
+- **License:** CC-BY-SA-4.0
+
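Statistics like these can be reproduced from the records themselves. Below is a minimal sketch, assuming naive whitespace tokenization (the card does not specify the tokenizer, so exact counts may differ) and two toy records standing in for the real 1,160 papers:

```python
# Sketch: computing corpus statistics from records that follow the
# four-field schema of this dataset. Toy data, whitespace tokenization.
from statistics import mean, pstdev

records = [
    {"header": "h1", "abstract": "a1", "keys": "k1",
     "text": ["Первый абзац текста.", "Второй абзац."]},
    {"header": "h2", "abstract": "a2", "keys": "k2",
     "text": ["Ещё один абзац статьи."]},
]

paragraphs = [p for r in records for p in r["text"]]
tokens_per_paper = [sum(len(p.split()) for p in r["text"]) for r in records]

print("papers:", len(records))
print("paragraphs:", len(paragraphs))
print("tokens total:", sum(tokens_per_paper))
print("tokens/paper: mean", mean(tokens_per_paper), "std", pstdev(tokens_per_paper))
```

Run over the full corpus, the same loop yields the paper, paragraph, and token counts reported above.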
+### Dataset Sources
+
+<!-- Provide the basic links for the dataset. -->
+
+- **Repository:** https://github.com/Astromis/Small-Student-Science-Corpus
+
+## Uses
+
+<!-- Address questions around how the dataset is intended to be used. -->
+
+### Direct Use
+
+* Paragraph segmentation
+* Keyword extraction
+* Title generation
+* Summarization
+* Static word vector construction
+
+## Dataset Structure
+
+<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
+
+Each data example has four fields:
+* header - the title of the paper
+* abstract - the abstract of the paper
+* keys - the keywords
+* text - a list of strings in which every element is a paragraph
+
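A record is consumed as a plain mapping with these four fields. The sketch below uses an invented toy record (the Russian strings are illustrative, not real corpus text); with the `datasets` library, `load_dataset` on this dataset's Hub id yields records of the same shape:

```python
# Sketch of one record following the schema above (toy data).
example = {
    "header": "Заголовок статьи",
    "abstract": "Краткая аннотация.",
    "keys": "ключевые слова",
    "text": ["Первый абзац.", "Второй абзац.", "Третий абзац."],
}

# `text` is a sequence of strings: each element is one gold paragraph,
# so joining with blank lines reconstructs the paper body.
full_text = "\n\n".join(example["text"])
n_paragraphs = len(example["text"])
print(n_paragraphs, "paragraphs")
```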
+## Dataset Creation
+
+### Curation Rationale
+
+<!-- Motivation for the creation of this dataset. -->
+
+The main objective is to provide a small dataset on which automatic text segmentation methods can be tested.
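As one illustration of that use (a sketch, not the authors' evaluation protocol), the gold paragraphs can be turned into per-sentence boundary labels that a segmentation method's predictions are scored against; the period-based sentence splitter here is a naive stand-in for a real one:

```python
# Sketch: derive per-sentence paragraph-boundary labels from the gold
# `text` field. Label 1 marks the last sentence of a paragraph.
# Naive splitting on '.' for illustration only.
def boundary_labels(paragraphs):
    labels = []
    for par in paragraphs:
        sents = [s for s in par.split(".") if s.strip()]
        labels.extend([0] * (len(sents) - 1) + [1])
    return labels

gold = ["Одно предложение. Ещё одно.", "Абзац из одного предложения."]
print(boundary_labels(gold))  # → [0, 1, 1]
```

Predicted labels from a segmenter can then be compared against this gold sequence with standard boundary metrics.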
+
+### Annotations
+
+Because the paper texts were extracted from PDFs, the annotators carefully reviewed each plain text to make sure that its paragraph boundaries match the source PDF version.