---
dataset_info:
  features:
  - name: _id
    dtype: string
  - name: character_id
    dtype: string
  - name: article_en
    dtype: string
  - name: article_vi
    dtype: string
  - name: character_url
    dtype: string
  - name: code
    dtype: string
  - name: description
    dtype: string
  - name: language
    dtype: string
  - name: title
    dtype: string
  - name: type
    dtype: string
  - name: metadata
    dtype: string
  - name: summary
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 5843304840
    num_examples: 515669
  download_size: 3190490993
  dataset_size: 5843304840
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

This dataset was built from Wikidata by collecting all characters that have both an English and a Vietnamese Wikipedia page and belong to one of the following classes:

- Q5: human
- Q95074: fictional character
- Q15632617: fictional human
- Q15773347: television character
- Q1114461: comic book character
- Q4271324: literary character

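A Wikidata SPARQL query along these lines can retrieve entities of one of those classes that have both an English and a Vietnamese sitelink. This is a sketch, not the exact query used to build the dataset; the property `P31` ("instance of") and the sitelink pattern are standard Wikidata SPARQL, but everything else is illustrative:

```python
# Sketch: build a SPARQL query for one of the character classes above.
# The query text is an assumption -- the dataset card does not include
# the exact query used during collection.
CHARACTER_CLASSES = [
    "Q5",          # human
    "Q95074",      # fictional character
    "Q15632617",   # fictional human
    "Q15773347",   # television character
    "Q1114461",    # comic book character
    "Q4271324",    # literary character
]

def build_query(qid: str, limit: int = 100) -> str:
    """Return a SPARQL query for entities that are instances of `qid`
    and have both an English and a Vietnamese Wikipedia article."""
    return f"""
    SELECT ?item ?enArticle ?viArticle WHERE {{
      ?item wdt:P31 wd:{qid} .
      ?enArticle schema:about ?item ;
                 schema:isPartOf <https://en.wikipedia.org/> .
      ?viArticle schema:about ?item ;
                 schema:isPartOf <https://vi.wikipedia.org/> .
    }}
    LIMIT {limit}
    """
```

The query would be sent to the public endpoint at `https://query.wikidata.org/sparql`; looping `build_query` over `CHARACTER_CLASSES` covers all six classes.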
The content of each character's page is then fetched with the `wikipedia-api` package.
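The `wikipedia-api` package is a thin wrapper over the MediaWiki API. A stdlib-only sketch of the equivalent fetch is shown below; the function names are illustrative, not the actual pipeline code:

```python
# Sketch: fetch the plain-text extract of one Wikipedia page via the
# MediaWiki action API -- the same data the wikipedia-api package wraps.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def extract_url(title: str, lang: str = "en") -> str:
    """Build the API URL that returns the plain-text extract of a page."""
    params = urlencode({
        "action": "query",
        "prop": "extracts",
        "explaintext": 1,   # plain text instead of HTML
        "format": "json",
        "titles": title,
    })
    return f"https://{lang}.wikipedia.org/w/api.php?{params}"

def fetch_article(title: str, lang: str = "en") -> str:
    """Download and return the page extract (makes a network call)."""
    with urlopen(extract_url(title, lang)) as resp:
        pages = json.load(resp)["query"]["pages"]
        return next(iter(pages.values())).get("extract", "")
```

Calling `fetch_article(title, "en")` and `fetch_article(title, "vi")` for each collected character would yield the `article_en` and `article_vi` fields.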