---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 41663206615.575096
    num_examples: 45009627
  download_size: 23630062762
  dataset_size: 41663206615.575096
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Dataset Card for "vietnamese-mlmcorpus"

A Vietnamese corpus for masked language modeling: 45,009,627 text spans (~41.7 GB) in a single `train` split, each example holding one `text` field of up to ~512 tokens.
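To use the released corpus directly, load it from the Hub (`your-namespace` below is a placeholder for this repo's actual owner):

```python
from datasets import load_dataset

# "your-namespace" is a placeholder; replace it with this repo's owner on the Hub
corpus = load_dataset("your-namespace/vietnamese-mlmcorpus", split="train")
print(corpus[0]['text'])
```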
## How to build the dataset
We prepare a few helper functions — a punctuation-based splitter (`split_puntual`), a tokenizer wrapper — and use the HF `datasets` library for efficient batched processing.
```python
def get_tokens(examples):
    '''
    Tokenize a list of text snippets into token ids.
    '''
    return tokenizer(examples)['input_ids']
```
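As a quick sanity check (a sketch, assuming the mT5 tokenizer loaded in the script below), each input string yields one list of token ids:

```python
ids = get_tokens(["xin chào", "hẹn gặp lại"])
print(len(ids))     # 2 — one id list per input string
print(len(ids[0]))  # the token count that the span budget below is based on
```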
```python
def truncation(passage, pattern=r'[.\n]'):
    '''
    Split a passage on the given pattern (sentence-final periods and newlines).
    '''
    output = re.split(pattern, passage)
    output = [item for item in output if len(item.split()) > 0]
    return output
```
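For example, a passage is split at periods and newlines, and empty pieces are dropped:

```python
>>> truncation("Hà Nội là thủ đô. Đà Nẵng ở miền Trung.\nHuế cũng vậy")
['Hà Nội là thủ đô', ' Đà Nẵng ở miền Trung', 'Huế cũng vậy']
```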
```python
def split_puntual(example, threshold=512):
    '''
    Split a long document into spans of at most ~`threshold` tokens.
    '''
    texts = truncation(example)
    tokenized = get_tokens(texts)
    tmp, group = [], []
    count = 0
    for tokens, text in zip(tokenized, texts):
        count += len(tokens)
        if count <= threshold:
            tmp.append(text.strip())
        else:
            if tmp:
                group.append('. '.join(tmp))  # flush the accumulated span
                count = len(tokens)           # restart the token budget at the current sentence
                tmp = [text.strip()]          # start a new span with it
            else:
                # a lone sentence already exceeds the threshold: skip it and reset the budget
                count = 0
    if tmp:
        group.append('. '.join(tmp))  # flush the final partial span
    return group
```
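A minimal sketch of its behavior; the exact span boundaries depend on the mT5 tokenizer's token counts:

```python
doc = "Hôm nay trời đẹp. Chúng tôi đi dạo quanh hồ. " * 200
spans = split_puntual(doc, threshold=512)
print(len(spans))     # several spans
print(spans[0][:60])  # sentences re-joined with '. ', each span at most ~512 tokens
```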
```python
def process(examples):
    '''
    Chunk a batch of documents into ~512-token spans.
    '''
    chunks = []
    for x in examples:
        chunks += split_puntual(x)
    return {'text': chunks}
```
Now we run the pipeline end to end:
```python
import re

from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('google/mt5-small')

if __name__ == '__main__':
    dataset = load_dataset("ademax/binhvq-news-corpus")
    print("Total original: ", dataset)
    dataset = dataset.map(
        lambda example: process(example['content']),
        num_proc=2, batched=True,
        remove_columns=['content', 'title', 'summary', 'category']
    )
    # keep only samples with more than 30 words
    dataset = dataset.filter(lambda example: len(example['text'].split(' ')) > 30)
    print("Processing: ", dataset)
    # train_test_split is defined on Dataset, not DatasetDict, so index the split first
    dataset = dataset['train'].train_test_split(test_size=0.0002)
    dataset.save_to_disk('release')
```
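The saved splits can then be reloaded for pretraining with `load_from_disk`:

```python
from datasets import load_from_disk

dataset = load_from_disk('release')
print(dataset)  # DatasetDict with 'train' and 'test' splits
print(dataset['train'][0]['text'][:100])
```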