---
configs:
- config_name: ar
  data_files:
  - path:
    - ar.jsonl.zst
    split: train
- config_name: assorted
  data_files:
  - path:
    - assorted.jsonl.zst
    split: train
- config_name: de
  data_files:
  - path:
    - de.jsonl.zst
    split: train
- config_name: en
  data_files:
  - path:
    - en.jsonl.zst
    split: train
  default: true
- config_name: es
  data_files:
  - path:
    - es.jsonl.zst
    split: train
- config_name: fa
  data_files:
  - path:
    - fa.jsonl.zst
    split: train
- config_name: fr
  data_files:
  - path:
    - fr.jsonl.zst
    split: train
- config_name: it
  data_files:
  - path:
    - it.jsonl.zst
    split: train
- config_name: ja
  data_files:
  - path:
    - ja.jsonl.zst
    split: train
- config_name: nl
  data_files:
  - path:
    - nl.jsonl.zst
    split: train
- config_name: pl
  data_files:
  - path:
    - pl.jsonl.zst
    split: train
- config_name: pt
  data_files:
  - path:
    - pt.jsonl.zst
    split: train
- config_name: ru
  data_files:
  - path:
    - ru.jsonl.zst
    split: train
- config_name: sv
  data_files:
  - path:
    - sv.jsonl.zst
    split: train
- config_name: uk
  data_files:
  - path:
    - uk.jsonl.zst
    split: train
- config_name: vi
  data_files:
  - path:
    - vi.jsonl.zst
    split: train
- config_name: zh
  data_files:
  - path:
    - zh.jsonl.zst
    split: train
license: cc-by-sa-4.0
language:
- multilingual
- ar
- de
- en
- es
- fa
- fr
- it
- ja
- nl
- pl
- pt
- ru
- sv
- uk
- vi
- zh
task_categories:
- text-generation
tags:
- huggingface
- wikipedia
- finewiki
- sample
---
|
|
|
|
|
# HuggingFaceFW/finewiki sample |
|
|
|
|
|
A uniform random sample of [HuggingFaceFW/finewiki](https://huggingface.co/datasets/HuggingFaceFW/finewiki), created to provide a smaller, more manageable dataset for analysis, fine-tuning, and benchmarking.
|
|
|
|
|
## Overview |
|
|
This sample includes Wikipedia articles from languages with more than one million pages. Sampling is performed uniformly at random, rather than by taking the first pages in alphabetical order, so every article has an equal chance of being included.
|
|
|
|
|
## Language Inclusion Criteria |
|
|
Languages were selected based on page count and content quality. The dataset excludes: |
|
|
- Cebuano and Waray, due to page quality concerns |
|
|
- Egyptian Arabic, as standard Arabic is already included |
|
|
|
|
|
For each selected language, 1% of its articles were chosen uniformly at random and then shuffled.
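
The exact pipeline used to produce these files is not published here; the following is a minimal sketch of that per-language step, assuming the `zstandard` package for the `.jsonl.zst` files, one JSON article per line, and a fixed seed chosen only for illustration. The source file name is hypothetical.

```python
import io
import random

import zstandard as zstd  # assumed dependency for the .jsonl.zst files

SAMPLE_RATE = 0.01      # 1% per language, per the description above
rng = random.Random(0)  # fixed seed chosen only for illustration

def sample_language(src_path: str, dst_path: str) -> None:
    """Keep a uniform ~1% of the articles in a .jsonl.zst dump, shuffled."""
    kept = []
    with open(src_path, "rb") as fh:
        text = io.TextIOWrapper(
            zstd.ZstdDecompressor().stream_reader(fh), encoding="utf-8"
        )
        for line in text:                   # one JSON article per line
            if rng.random() < SAMPLE_RATE:  # uniform, order-independent pick
                kept.append(line)
    rng.shuffle(kept)                       # drop any residual source ordering
    with open(dst_path, "wb") as fh, zstd.ZstdCompressor().stream_writer(fh) as out:
        out.write("".join(kept).encode("utf-8"))

sample_language("en-full.jsonl.zst", "en.jsonl.zst")  # hypothetical source file
```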
|
|
|
|
|
## Dataset Composition |
|
|
From each language configuration, 1,000 pages were randomly selected to form the **assorted** configuration, which aggregates samples from all languages.
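
A sketch of how such an aggregation could be built from the per-language files: the file names and the 1,000-page count follow the card, while the reader helper and the fixed seed are illustrative assumptions.

```python
import io
import json
import random

import zstandard as zstd  # assumed dependency for the .jsonl.zst files

LANGUAGES = ["ar", "de", "en", "es", "fa", "fr", "it", "ja",
             "nl", "pl", "pt", "ru", "sv", "uk", "vi", "zh"]
PAGES_PER_LANGUAGE = 1_000  # per the description above
rng = random.Random(0)      # illustrative fixed seed

def read_jsonl_zst(path: str) -> list:
    """Load every record from a zstd-compressed JSON-lines file."""
    with open(path, "rb") as fh:
        text = io.TextIOWrapper(
            zstd.ZstdDecompressor().stream_reader(fh), encoding="utf-8"
        )
        return [json.loads(line) for line in text]

assorted = []
for lang in LANGUAGES:
    pages = read_jsonl_zst(f"{lang}.jsonl.zst")
    assorted.extend(rng.sample(pages, PAGES_PER_LANGUAGE))  # 1,000 per language
rng.shuffle(assorted)  # interleave languages rather than leaving language blocks
```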
|
|
|
|
|
## Available Configurations |
|
|
`ar, de, en, es, fa, fr, it, ja, nl, pl, pt, ru, sv, uk, vi, zh, assorted` |
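
Any configuration can be loaded by name with the `datasets` library. The repository id below is a placeholder; substitute this dataset's actual id.

```python
from datasets import load_dataset

# "user/finewiki-sample" is a placeholder for this repository's actual id.
ds = load_dataset("user/finewiki-sample", "assorted", split="train")
print(ds[0])
```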