---
license: apache-2.0
language:
  - zh
task_categories:
  - fill-mask
size_categories:
  - 1M<n<10M
---

This repository demonstrates how to download, extract, and convert the latest Chinese Wikipedia corpus.

## Download

You can download the latest Chinese Wikipedia dump (`zhwiki-latest-pages-articles.xml.bz2`) from the official Wikimedia dumps site: https://dumps.wikimedia.org/zhwiki/latest/
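For example, with `wget`:

```bash
# fetch the full articles dump (several GB)
wget https://dumps.wikimedia.org/zhwiki/latest/zhwiki-latest-pages-articles.xml.bz2
```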

## Extraction

After you download the dump, you can extract the data using the following commands:

```bash
# install wikiextractor
pip install wikiextractor

# extract the data
wikiextractor --json -o <output_dir> zhwiki-latest-pages-articles.xml.bz2
```

The extracted data is written to numbered shards under `<output_dir>`, e.g. `<output_dir>/AA/wiki_00`.
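With `--json`, each shard is JSON Lines: one article per line, with a schema roughly like the one below (the exact field set can vary slightly between wikiextractor versions):

```json
{"id": "<page_id>", "url": "<page_url>", "title": "<page_title>", "text": "<article_text>"}
```

The converter below relies on the `title` and `text` fields.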

## Conversion

You can convert the extracted data into a format better suited to your needs with the following commands:

```bash
# install opencc for Traditional-to-Simplified conversion
# (the pure-Python package that provides `from opencc import OpenCC`)
pip install opencc-python-reimplemented

# convert the extracted data
python wiki_parser.py --data_folder <input_folder> --format json
```
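As a quick sanity check that OpenCC is installed and behaves as expected, a minimal Traditional-to-Simplified conversion looks like this:

```python
from opencc import OpenCC

cc = OpenCC('t2s')  # Traditional -> Simplified
print(cc.convert('漢語維基百科'))  # prints: 汉语维基百科
```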

Below is a demo implementation of `wiki_parser.py`:

```python
import argparse
import json
import os
import shutil
from os import path

import pandas as pd
from opencc import OpenCC
from tqdm import tqdm

# Traditional-to-Simplified converter (other modes are available, e.g. t2s, s2twp)
converter = OpenCC('t2s')


def wiki_processing():
    error = 0
    dataset = []
    for _dir in tqdm(os.listdir(args.data_folder)):
        subdir = path.join(args.data_folder, _dir)
        for _file in os.listdir(subdir):
            try:
                # each extracted shard is JSON Lines: one article per line
                with open(path.join(subdir, _file), 'r', encoding='utf-8') as f:
                    texts = [json.loads(line) for line in f]
                for text in texts:
                    text['text'] = converter.convert(text['text'])
                    text['title'] = converter.convert(text['title'])
                dataset.extend(texts)
            except UnicodeDecodeError as e:
                print(e)
                error += 1
                continue

        if args.remove_input:
            # delete the input folder and its contents
            shutil.rmtree(subdir)

    # write the converted corpus in the requested format
    if args.format in ('json', 'jsonl'):
        # both formats are JSON Lines; only the file extension differs
        output_file = path.join(args.output_dir, f'wiki_zh_latest.{args.format}')
        with open(output_file, 'w', encoding='utf-8') as f:
            for text in dataset:
                f.write(json.dumps(text, ensure_ascii=False) + '\n')
    elif args.format == 'parquet':
        # convert the records into a DataFrame and write Parquet
        df = pd.DataFrame(dataset)
        output_file = path.join(args.output_dir, 'wiki_zh_latest.parquet')
        df.to_parquet(output_file, engine='pyarrow')
    elif args.format == 'txt':
        # plain text: one article body per line
        with open(path.join(args.output_dir, 'wiki_zh_latest.txt'), 'w', encoding='utf-8') as f:
            for text in dataset:
                f.write(text['text'] + '\n')
    else:
        raise ValueError('Invalid format')

    print('Finished!')
    print(f'total: {len(dataset)}, error: {error}')


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--data_folder', type=str, help='folder containing the extracted wiki data')
    parser.add_argument('--output_dir', type=str, default='./', help='output folder')
    parser.add_argument('--format', type=str, choices=['json', 'jsonl', 'parquet', 'txt'], default='json', help='output format')
    parser.add_argument('--remove_input', action='store_true', help='remove input files after processing')
    args = parser.parse_args()

    # run the conversion
    wiki_processing()
```
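A typical end-to-end run might look like this (paths are illustrative):

```bash
# convert the extracted shards to a single Parquet file in the current directory
python wiki_parser.py --data_folder <output_dir> --format parquet --output_dir ./
```

The resulting `wiki_zh_latest.parquet` can then be loaded directly with `pandas.read_parquet` or `datasets.load_dataset('parquet', data_files='wiki_zh_latest.parquet')`.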