UmarAzam committed on
Commit
9ae7257
·
verified ·
1 Parent(s): b496154

Added Reference for dataset generation


This dataset was generated by filtering a subset of the Wikipedia dataset ("wikimedia/wikipedia", config "20231101.en").

Detailed information on how this was accomplished is given in this notebook: https://github.com/Umar-Azam/embedding_finetuner_wiki/tree/main

Short explanation: we maintain a list of filter keywords. Each Wikipedia article's text is tokenized into a set of words, and the "hits" column records how many of the filter keywords appear in that set. Only articles with more than 4 matches are kept to generate this dataset.
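The filtering step can be sketched as follows. This is a minimal illustration, not the notebook's actual code: the real keyword list and tokenizer live in the linked repository, so `KEYWORDS` and the whitespace tokenizer here are assumptions.

```python
# Illustrative sketch of the keyword-hit filter described above.
# KEYWORDS is a placeholder; the real list is defined in the linked notebook.
KEYWORDS = {"turbine", "alloy", "hydraulic", "valve", "compressor"}

def count_hits(text: str) -> int:
    """Tokenize the article into a set of lowercased words and count
    how many filter keywords appear in it."""
    words = set(text.lower().split())
    return len(KEYWORDS & words)

def filter_articles(texts):
    """Keep only articles with more than 4 keyword matches, attaching
    the 'hits' count as in the released dataset's schema."""
    return [
        {"text": t, "hits": h}
        for t in texts
        if (h := count_hits(t)) > 4
    ]
```

Applied to the full English Wikipedia dump, a filter like this yields the 219,846 articles recorded in the dataset card below.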

Files changed (1)
  1. README.md +27 -20
README.md CHANGED
@@ -1,20 +1,27 @@
----
-license: mit
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: data/train-*
-dataset_info:
-  features:
-  - name: text
-    dtype: string
-  - name: hits
-    dtype: int64
-  splits:
-  - name: train
-    num_bytes: 4041398525
-    num_examples: 219846
-  download_size: 2293791651
-  dataset_size: 4041398525
----
+---
+license: mit
+configs:
+- config_name: default
+  data_files:
+  - split: train
+    path: data/train-*
+dataset_info:
+  features:
+  - name: text
+    dtype: string
+  - name: hits
+    dtype: int64
+  splits:
+  - name: train
+    num_bytes: 4041398525
+    num_examples: 219846
+  download_size: 2293791651
+  dataset_size: 4041398525
+task_categories:
+- feature-extraction
+language:
+- en
+pretty_name: 'wikipedia industrial technical '
+size_categories:
+- 100K<n<1M
+---