---
license: mit
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: text
    dtype: string
  - name: hits
    dtype: int64
  splits:
  - name: train
    num_bytes: 4041398525
    num_examples: 219846
  download_size: 2293791651
  dataset_size: 4041398525
task_categories:
- feature-extraction
language:
- en
pretty_name: Wikipedia Industrial Technical
size_categories:
- 100K<n<1M
---

This dataset is a filtered subset of the Wikipedia dataset `"wikimedia/wikipedia"` -> `"20231101.en"`.

Detailed information on how this was produced is given in this notebook: https://github.com/Umar-Azam/embedding_finetuner_wiki/tree/main

Short explanation: we start with a list of filter keywords. Each Wikipedia article's text is tokenized into a set of words, and the `hits` column records how many of the filter keywords appear in that set. Only the articles with more than 4 matches are kept to generate this dataset.
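The filtering step described above can be sketched as follows. This is a minimal illustration, not the actual pipeline from the linked notebook; in particular, the keyword list here is a made-up placeholder, and the real tokenization may differ from a simple whitespace split.

```python
# Illustrative sketch of the keyword-hit filter described in this card.
# FILTER_KEYWORDS below is a hypothetical example list, NOT the real one
# used to build the dataset.
FILTER_KEYWORDS = {"turbine", "alloy", "hydraulic", "voltage", "manufacturing"}

def count_hits(text: str) -> int:
    """Count how many filter keywords appear in the article's word set."""
    words = set(text.lower().split())
    return len(FILTER_KEYWORDS & words)

def keep(example: dict) -> bool:
    """Keep only articles with more than 4 keyword matches."""
    return count_hits(example["text"]) > 4

article = {"text": "The turbine alloy uses hydraulic voltage manufacturing processes."}
print(count_hits(article["text"]))  # 5
print(keep(article))                # True
```

With the `datasets` library, a filter like `keep` could be applied to the source split via `dataset.filter(keep)`, storing `count_hits` in the `hits` column alongside each kept `text`.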