---
license: mit
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: text
    dtype: string
  - name: hits
    dtype: int64
  splits:
  - name: train
    num_bytes: 4041398525
    num_examples: 219846
  download_size: 2293791651
  dataset_size: 4041398525
task_categories:
- feature-extraction
language:
- en
pretty_name: Wikipedia Industrial Technical
size_categories:
- 100K<n<1M
---



This dataset was generated by filtering a subset of the `wikimedia/wikipedia` dataset (`20231101.en` configuration).

Detailed information on how this was accomplished is given in this notebook: https://github.com/Umar-Azam/embedding_finetuner_wiki/tree/main

Short explanation: we maintain a list of filter keywords. Each Wikipedia article's text is tokenized into a set of words, and the `hits` field records how many of those keywords appear in that set. Only articles with more than 4 matches are kept in this dataset.
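The filtering step above can be sketched as follows. This is a minimal illustration, not the exact notebook code: `KEYWORDS` is a small hypothetical stand-in for the real keyword list, and tokenization is approximated with a simple lowercase whitespace split.

```python
# Hedged sketch of the keyword-hit filtering described above.
# KEYWORDS is an illustrative stand-in for the actual filter list.
KEYWORDS = {"turbine", "alloy", "hydraulic", "compressor", "welding", "bearing"}

def count_hits(text: str, keywords: set = KEYWORDS) -> int:
    # Tokenize the article text into a set of lowercase words,
    # then count how many filter keywords occur in that set.
    words = set(text.lower().split())
    return len(keywords & words)

def filter_articles(articles):
    # Keep only articles with more than 4 keyword matches,
    # recording the match count in the dataset's "hits" field.
    kept = []
    for text in articles:
        hits = count_hits(text)
        if hits > 4:
            kept.append({"text": text, "hits": hits})
    return kept
```

Because membership is tested against a word *set*, each keyword is counted at most once per article regardless of how often it repeats in the text.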