Bertug1911 committed (verified)
Commit ff929bf · Parent: 112993e

Update README.md

Files changed (1): README.md (+96 -3)
---
license: mit
size_categories:
- 1M<n<10M
task_categories:
- text-generation
language:
- en
---
# Dataset Card for OpenBrt-2m

## Dataset Details

### Dataset Description

This dataset contains three files:

* `train.csv`: unfiltered; about 2.4M tokens, average quality score (qscore) of roughly 0.5
* `train_filtered.csv`: filtered by **quality score** (contains only randomly selected Wikipedia articles with qscore > 0.7); average qscore of roughly 1.058
* `train_ultra_filtered.csv`: filtered by high **quality score** (contains only randomly selected Wikipedia articles with qscore > 1.0); average qscore of roughly 1.16
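The relationship between the three files can be sketched with pandas. This is only an illustration: the `qscore` column name and the toy rows below are assumptions, so check the real CSV headers before relying on them.

```python
# Illustrative sketch: how the three files relate by quality score.
# The "qscore" column name and these toy rows are assumptions.
import pandas as pd

# Toy stand-in for rows of train.csv (the unfiltered file).
rows = pd.DataFrame({
    "text": ["article a", "article b", "article c", "article d"],
    "qscore": [0.42, 0.75, 1.05, 1.20],
})

filtered = rows[rows["qscore"] > 0.7]        # roughly train_filtered.csv
ultra_filtered = rows[rows["qscore"] > 1.0]  # roughly train_ultra_filtered.csv

print(len(rows), len(filtered), len(ultra_filtered))  # 4 3 2
```

Each stricter threshold keeps a smaller, higher-quality subset of the same pool of articles.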
- **Curated by:** Bertug Gunel (Bertuğ Günel)
- **Funded by [optional]:** Nobody
- **Shared by [optional]:** Nobody
- **Language(s) (NLP):** English (en)
- **License:** MIT

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** Coming soon!
- **Paper [optional]:** Coming soon!
- **Demo [optional]:** Coming soon!
## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

You can download the `.csv` files and use them directly.
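A minimal sketch of reading one of the card's CSV files with Python's standard library. A tiny stand-in file is written first so the snippet runs anywhere; the `text`/`qscore` column names are assumptions, so inspect the real header before relying on them.

```python
# Sketch: reading one of the dataset's CSV files with the stdlib csv module.
# A stand-in file is created so the snippet is self-contained; the column
# names ("text", "qscore") are assumptions about the real schema.
import csv
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "train_filtered.csv")
with open(path, "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["text", "qscore"])
    writer.writerow(["A sample Wikipedia paragraph.", "1.05"])

with open(path, newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

print(len(rows), rows[0]["qscore"])  # 1 1.05
```

The same pattern applies unchanged to `train.csv` and `train_ultra_filtered.csv`.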
### Out-of-Scope Use

The dataset is not well suited for training models on everyday dialogue.

## Dataset Structure

Train split only; the largest file in the split is about 5 MB.

## Dataset Creation

### Curation Rationale

* We trained a model, but our dataset's quality score was only about 0.4, and the model was weak on STEM and general-knowledge topics (math, science, history, code, chemistry, geography, etc.).
* So we decided to build our own dataset from Wikipedia.
### Source Data

Randomly selected Wikipedia articles (about 230 articles, roughly 2.4 million tokens, in English).

#### Data Collection and Processing

- We collected the articles.
- We normalized quotation marks, turning "x" into 'x'.
- We removed malformed punctuation (keeping standard marks such as {[]}\|-!'^+%&/()=?*_.,;:<>"").
- We filtered out low-quality tokens, sentences, and articles.
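The exact cleaning rules are not published, so the following is only a hypothetical sketch of what a filter implementing the steps above might look like. The allowed-punctuation set is copied from the list above; the quote normalization follows the "x" to 'x' step; the minimum-length threshold is invented for illustration.

```python
# Hypothetical cleaning filter illustrating the steps described above.
# The allowed punctuation comes from the card; the length cutoff is invented.
import re
from typing import Optional

ALLOWED = set("{[]}\\|-!'^+%&/()=?*_.,;:<>\"")  # punctuation kept per the card

def clean(text: str, min_len: int = 20) -> Optional[str]:
    text = text.replace('"', "'")  # normalize "x" to 'x'
    # Drop any character that is not alphanumeric, whitespace, or allowed.
    text = "".join(c for c in text if c.isalnum() or c.isspace() or c in ALLOWED)
    text = re.sub(r"\s+", " ", text).strip()
    # Discard very short (low-quality) results.
    return text if len(text) >= min_len else None

print(clean('He said "hello" @ twice.'))  # He said 'hello' twice.
print(clean("hi"))                        # None
```

A real pipeline would also need the quality-score model itself, which the card does not describe.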
#### Who are the source data producers?

- Source: Wikipedia (Creative Commons license)
- Producers: all the writers and contributors of Wikipedia

#### Personal and Sensitive Information

- The dataset does NOT contain any special or sensitive data.
- Use at YOUR OWN RISK.
## Bias, Risks, and Limitations

- The dataset may contain material ***not suitable for all audiences***, including ***political***, ***sexual***, ***18+***, ***gambling***, ***betting***, ***drug-related***, ***violent***, ***horror***, and ***graphic*** content, as well as **descriptions of ***illegal*** activities**.

- ***USE AT YOUR OWN RISK***

## Dataset Card Contact

* Bertug Gunel
* Eskisehir, Turkey
* bertugscpmail@gmail.com or bertug2099@gmail.com; feel free to get in touch!