---
configs:
- config_name: ar
  data_files:
  - path:
    - ar.jsonl.zst
    split: train
- config_name: assorted
  data_files:
  - path:
    - assorted.jsonl.zst
    split: train
- config_name: de
  data_files:
  - path:
    - de.jsonl.zst
    split: train
- config_name: en
  data_files:
  - path:
    - en.jsonl.zst
    split: train
  default: true
- config_name: es
  data_files:
  - path:
    - es.jsonl.zst
    split: train
- config_name: fa
  data_files:
  - path:
    - fa.jsonl.zst
    split: train
- config_name: fr
  data_files:
  - path:
    - fr.jsonl.zst
    split: train
- config_name: it
  data_files:
  - path:
    - it.jsonl.zst
    split: train
- config_name: ja
  data_files:
  - path:
    - ja.jsonl.zst
    split: train
- config_name: nl
  data_files:
  - path:
    - nl.jsonl.zst
    split: train
- config_name: pl
  data_files:
  - path:
    - pl.jsonl.zst
    split: train
- config_name: pt
  data_files:
  - path:
    - pt.jsonl.zst
    split: train
- config_name: ru
  data_files:
  - path:
    - ru.jsonl.zst
    split: train
- config_name: sv
  data_files:
  - path:
    - sv.jsonl.zst
    split: train
- config_name: uk
  data_files:
  - path:
    - uk.jsonl.zst
    split: train
- config_name: vi
  data_files:
  - path:
    - vi.jsonl.zst
    split: train
- config_name: zh
  data_files:
  - path:
    - zh.jsonl.zst
    split: train
license: cc-by-sa-4.0
language:
- multilingual
- ar
- de
- en
- es
- fa
- fr
- it
- ja
- nl
- pl
- pt
- ru
- sv
- uk
- vi
- zh
task_categories:
- text-generation
tags:
- huggingface
- wikipedia
- finewiki
- sample
---
# HuggingFaceFW/finewiki sample
A uniformly sampled subset of [HuggingFaceFW/finewiki](https://huggingface.co/datasets/HuggingFaceFW/finewiki), created to provide a smaller, more manageable dataset for analysis, fine-tuning, and benchmarking.
## Overview
This sample includes Wikipedia articles from languages with more than one million pages. Sampling is performed uniformly at random, rather than by taking the first articles in alphabetical order, to ensure unbiased representation.
## Language Inclusion Criteria
Languages were selected based on page count and content quality. The dataset excludes:
- Cebuano and Waray, due to page quality concerns
- Egyptian Arabic, as standard Arabic is already included
For each selected language, 1% of the total articles were chosen uniformly at random and then shuffled.
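The card does not publish the sampling code; the selection step above can be sketched as follows, assuming Python's standard `random` module (the `sample_fraction` helper name and the fixed seed are illustrative assumptions, not part of the original pipeline):

```python
import random


def sample_fraction(items, fraction, seed=0):
    """Pick a uniform random fraction of items, then shuffle the result.

    Sketch of the "1% chosen uniformly at random and shuffled" step;
    the seed is assumed here only to make the example reproducible.
    """
    rng = random.Random(seed)
    k = max(1, int(len(items) * fraction))
    chosen = rng.sample(items, k)  # uniform sampling without replacement
    rng.shuffle(chosen)            # shuffle so output order carries no signal
    return chosen


# Example: 1% of 10,000 hypothetical article IDs -> 100 articles.
pages = [f"article_{i}" for i in range(10_000)]
subset = sample_fraction(pages, 0.01)
print(len(subset))  # 100
```

`random.sample` draws without replacement, so no article can appear twice in a language's subset.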
## Dataset Composition
From each language configuration, 1,000 pages were randomly selected and aggregated into an **assorted** configuration that combines samples from all languages.
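A minimal sketch of how such an aggregated configuration could be built, assuming per-language page lists held in memory (the `build_assorted` helper, the record layout, and the seed are assumptions for illustration):

```python
import random


def build_assorted(per_language, n=1000, seed=0):
    """Draw up to n random pages from each language and pool them.

    per_language: dict mapping a language code to its list of pages.
    Returns a single shuffled list of {"lang", "page"} records.
    """
    rng = random.Random(seed)
    assorted = []
    for lang, pages in sorted(per_language.items()):
        picked = rng.sample(pages, min(n, len(pages)))
        assorted.extend({"lang": lang, "page": p} for p in picked)
    rng.shuffle(assorted)  # interleave languages instead of grouping them
    return assorted


# Toy example with two tiny "languages" and n=10 per language.
per_language = {
    "en": [f"en_{i}" for i in range(50)],
    "de": [f"de_{i}" for i in range(50)],
}
pool = build_assorted(per_language, n=10)
print(len(pool))  # 20
```

With 16 language configurations and 1,000 pages each, the real assorted split would hold 16,000 pages.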
## Available Configurations
`ar`, `de`, `en`, `es`, `fa`, `fr`, `it`, `ja`, `nl`, `pl`, `pt`, `ru`, `sv`, `uk`, `vi`, `zh`, `assorted`