FineWeb-2 NLP
23,235,231 sentences and 371,647,537 word tokens across 996 languages, extracted from 528,708 source documents (789.7 MB of source data) in FineWeb-2. Every sentence, paragraph, word frequency, and n-gram frequency is produced with language-aware segmentation and continuously updated.
What is this?
FineWeb-2 is HuggingFace's multilingual web text corpus. It contains approximately 5 billion documents totaling 20 TB of text, drawn from roughly 100 Common Crawl snapshots spanning 2013 to 2024, and covering 1,868 language-script pairs. It is the largest curated multilingual web corpus publicly available today.
Working directly with FineWeb-2 is challenging. The raw data is enormous, and common NLP tasks like sentence extraction, word frequency analysis, or n-gram computation require downloading and processing terabytes of parquet files. Most researchers need just one language, or just the sentences, or just the word frequencies. They should not have to process the entire corpus to get there.
FineWeb-2 NLP solves this by pre-segmenting every document in FineWeb-2 into four linguistically useful units:
| Type | Rows | What you get |
|---|---|---|
| sentences | 23,235,231 | One row per sentence, with source document ID, URL, and position index |
| paragraphs | 549,363 | One row per paragraph, with sentence count per paragraph |
| words | 20,542,963 | Per-shard word frequency and document frequency tables |
| ngrams | 475,285,790 | Per-shard bigram through 5-gram frequency tables |
Every row traces back to its source document through doc_id and doc_url fields, making
it possible to navigate from any sentence or word back to the original web page. This
traceability is important for research that needs to verify context, check for
contamination, or build training sets with known provenance.
Why per-shard frequency tables?
Words and n-grams are computed per source shard rather than aggregated into a single global table for each language. This design choice is intentional: some languages in FineWeb-2 contain over 700 million documents, and building a single frequency table for that volume would require holding hundreds of millions of unique entries in memory simultaneously. By keeping frequencies per-shard, each output file stays small and self-contained.
Aggregation is straightforward. A single DuckDB query can combine all shards for a language in seconds:
```sql
-- Language-level word frequencies in one query
SELECT word, sum(frequency) as total_freq, sum(doc_frequency) as total_doc_freq
FROM 'hf://datasets/open-index/fineweb-2-nlp/data/words/lat_Latn/*.parquet'
GROUP BY word ORDER BY total_freq DESC LIMIT 100;
```
What is being released?
Four dataset configs, all stored as Zstandard-compressed Parquet files:
1. Sentences (config_name: sentences)
| Column | Type | Description |
|---|---|---|
| sentence | string | The extracted sentence |
| doc_id | string | Source document UUID from FineWeb-2 |
| doc_url | string | Original web page URL |
| position | int32 | 0-based sentence index within the document |
| language | string | ISO 639-3 language code (e.g. lat, vie, cmn) |
| language_script | string | ISO 15924 script (e.g. Latn, Hani, Cyrl) |
2. Paragraphs (config_name: paragraphs)
| Column | Type | Description |
|---|---|---|
| paragraph | string | The paragraph text |
| doc_id | string | Source document UUID |
| doc_url | string | Original web page URL |
| position | int32 | 0-based paragraph index within the document |
| language | string | ISO 639-3 code |
| language_script | string | ISO 15924 script |
| sentence_count | int32 | Number of sentences detected in this paragraph |
3. Words (config_name: words)
| Column | Type | Description |
|---|---|---|
| word | string | Lowercased, NFC-normalized word |
| frequency | int64 | Occurrence count within this shard |
| doc_frequency | int64 | Documents containing this word (within shard) |
| language | string | ISO 639-3 code |
| language_script | string | ISO 15924 script |
4. N-grams (config_name: ngrams)
| Column | Type | Description |
|---|---|---|
| ngram | string | Space-joined n-gram (e.g. "of the", "in the world") |
| n | int32 | N-gram size: 2 (bigram), 3 (trigram), 4, or 5 |
| frequency | int64 | Occurrence count within this shard |
| language | string | ISO 639-3 code |
| language_script | string | ISO 15924 script |
Data organization
```
open-index/fineweb-2-nlp/
├── README.md
├── stats.csv
└── data/
    ├── sentences/
    │   ├── lat_Latn/
    │   │   └── 0000.parquet
    │   ├── vie_Latn/
    │   │   ├── 0000.parquet
    │   │   └── ...
    │   └── {lang_script}/
    │       └── {shard:04d}.parquet
    ├── paragraphs/
    │   └── {lang_script}/{shard:04d}.parquet
    ├── words/
    │   └── {lang_script}/{shard:04d}.parquet
    └── ngrams/
        └── {lang_script}/{shard:04d}.parquet
```
Each source FineWeb-2 shard maps to exactly one output file per type per language.
Shard names are zero-padded four-digit integers (0000, 0001, ...) that match the
source file ordering from HuggingFace.
Sentence distribution by language
```
non_Latn ████████████████████████████████████████ 527,948
tuk_Cyrl ████████████████ 221,123
alt_Cyrl ███████████████ 205,540
qug_Latn ██████████████ 193,962
tcz_Latn ███████████ 152,266
gom_Latn ███████████ 146,526
nbl_Latn ██████████ 140,093
lua_Latn ██████████ 138,946
mni_Latn ██████████ 132,013
ssw_Latn █████████ 127,536
kng_Latn █████████ 126,936
mos_Latn █████████ 126,123
mnw_Mymr █████████ 120,477
pck_Latn █████████ 119,867
tiv_Latn ████████ 115,586
ron_Cyrl ███████ 104,553
npi_Latn ███████ 104,197
mdf_Cyrl ███████ 103,765
nzi_Latn ███████ 103,176
pam_Latn ███████ 98,608
dak_Latn ███████ 95,906
btx_Latn ███████ 95,824
iso_Latn ███████ 95,104
ory_Latn ███████ 93,598
mar_Latn ███████ 93,468
dag_Latn ██████ 87,526
bci_Latn ██████ 86,266
sgs_Latn ██████ 84,996
chk_Latn ██████ 84,764
lzh_Hani ██████ 84,487
```
SQL to reproduce this chart
```sql
SELECT language || '_' || language_script as lang, count(*) as sentences
FROM 'hf://datasets/open-index/fineweb-2-nlp/data/sentences/**/*.parquet'
GROUP BY lang ORDER BY sentences DESC LIMIT 30;
```
Paragraph distribution by language
```
mos_Latn ████████████████████████████████████████ 3,396
szy_Latn ████████████████████████████████████ 3,107
mdf_Cyrl ████████████████████████████████████ 3,095
non_Latn ██████████████████████████████████ 2,970
nah_Latn █████████████████████████████████ 2,823
glv_Latn █████████████████████████████████ 2,815
tok_Latn █████████████████████████████████ 2,807
sgs_Latn ████████████████████████████████ 2,753
gcf_Latn ███████████████████████████████ 2,688
npi_Latn ███████████████████████████████ 2,687
bjn_Latn ██████████████████████████████ 2,568
nbl_Latn ██████████████████████████████ 2,559
aaz_Latn ██████████████████████████████ 2,559
mnw_Mymr ██████████████████████████████ 2,553
acd_Latn █████████████████████████████ 2,546
ach_Latn █████████████████████████████ 2,546
sms_Latn █████████████████████████████ 2,520
nzi_Latn █████████████████████████████ 2,493
tcz_Latn █████████████████████████████ 2,468
nak_Latn ████████████████████████████ 2,433
```
SQL to reproduce this chart
```sql
SELECT language || '_' || language_script as lang, count(*) as paragraphs
FROM 'hf://datasets/open-index/fineweb-2-nlp/data/paragraphs/**/*.parquet'
GROUP BY lang ORDER BY paragraphs DESC LIMIT 20;
```
Splitting quality overview
```
ade_Latn ████████████████████████████████████████ 386.5 sent/doc
swg_Latn ███████████████████████████████ 302.5 sent/doc
tuk_Cyrl ███████████████████████████ 267.7 sent/doc
dak_Latn ████████████████████████ 232.8 sent/doc
non_Latn ███████████████████████ 226.5 sent/doc
pkb_Latn ████████████████████ 197.3 sent/doc
lem_Latn ███████████████████ 186.6 sent/doc
wob_Latn █████████████████ 167.1 sent/doc
guh_Latn ███████████████ 154.4 sent/doc
lzh_Hani ███████████████ 151.4 sent/doc
rmn_Grek ███████████████ 150.7 sent/doc
esk_Latn ██████████████ 144.9 sent/doc
quh_Latn ██████████████ 136.2 sent/doc
txu_Latn ████████████ 120.4 sent/doc
byr_Latn ████████████ 116.9 sent/doc
ian_Latn ████████████ 116.6 sent/doc
yss_Latn ███████████ 115.2 sent/doc
cbt_Latn ███████████ 113.4 sent/doc
amx_Latn ███████████ 111.6 sent/doc
nab_Latn ███████████ 111.3 sent/doc
```
The chart above shows the average number of sentences extracted per source document for each language. This metric serves as a rough proxy for content quality and structural richness. Languages where the average is high tend to contain longer, well-structured articles with clear paragraph and sentence boundaries. Languages with lower averages typically have shorter source documents, or they use scripts and punctuation patterns where automatic sentence boundary detection is more difficult.
How to download and use this dataset
1. DuckDB (recommended for exploration)
DuckDB can query HuggingFace parquet files directly over HTTP without downloading anything to disk. This makes it the fastest way to explore the dataset.
```sql
-- Count sentences per language
SELECT language, language_script, count(*) as sentences
FROM 'hf://datasets/open-index/fineweb-2-nlp/data/sentences/**/*.parquet'
GROUP BY ALL ORDER BY sentences DESC;

-- Read Latin sentences
SELECT sentence, doc_url
FROM 'hf://datasets/open-index/fineweb-2-nlp/data/sentences/lat_Latn/*.parquet'
LIMIT 20;

-- Top 100 most frequent words in a language
SELECT word, frequency, doc_frequency
FROM 'hf://datasets/open-index/fineweb-2-nlp/data/words/vie_Latn/*.parquet'
ORDER BY frequency DESC LIMIT 100;

-- Most common bigrams in Latin
SELECT ngram, frequency
FROM 'hf://datasets/open-index/fineweb-2-nlp/data/ngrams/lat_Latn/*.parquet'
WHERE n = 2
ORDER BY frequency DESC LIMIT 50;

-- Average sentences per document per language
SELECT language, language_script,
       count(DISTINCT doc_id) as docs,
       count(*) as sentences,
       round(count(*) * 1.0 / count(DISTINCT doc_id), 1) as avg_sent_per_doc
FROM 'hf://datasets/open-index/fineweb-2-nlp/data/sentences/**/*.parquet'
GROUP BY language, language_script ORDER BY sentences DESC LIMIT 20;

-- Aggregate word frequencies across all shards
SELECT word, sum(frequency) as total_freq
FROM 'hf://datasets/open-index/fineweb-2-nlp/data/words/lat_Latn/*.parquet'
GROUP BY word ORDER BY total_freq DESC LIMIT 50;

-- Find sentences containing a specific word
SELECT sentence, doc_url
FROM 'hf://datasets/open-index/fineweb-2-nlp/data/sentences/lat_Latn/*.parquet'
WHERE sentence ILIKE '%roma%'
LIMIT 20;
```
2. Python (datasets library)
```python
from datasets import load_dataset

# Stream all sentences (no full download needed)
ds = load_dataset("open-index/fineweb-2-nlp", "sentences", split="train", streaming=True)
for row in ds.take(10):
    print(f"[{row['language']}] {row['sentence'][:100]}")

# Load paragraphs for a specific language
ds = load_dataset("open-index/fineweb-2-nlp", "paragraphs", split="train", streaming=True)
lat_paras = (row for row in ds if row["language"] == "lat")

# Word frequencies
ds = load_dataset("open-index/fineweb-2-nlp", "words", split="train", streaming=True)
for row in ds.take(20):
    print(f"{row['word']:20s} freq={row['frequency']:>8,} doc_freq={row['doc_frequency']:>6,}")

# N-gram analysis
ds = load_dataset("open-index/fineweb-2-nlp", "ngrams", split="train", streaming=True)
bigrams = (row for row in ds if row["n"] == 2)
```
3. huggingface_hub CLI
```bash
# Download all Latin sentences
huggingface-cli download open-index/fineweb-2-nlp --include "data/sentences/lat_Latn/*" --repo-type dataset

# Download Vietnamese words and ngrams
huggingface-cli download open-index/fineweb-2-nlp --include "data/words/vie_Latn/*" "data/ngrams/vie_Latn/*" --repo-type dataset

# Download everything for one language
huggingface-cli download open-index/fineweb-2-nlp --include "data/*/lat_Latn/*" --repo-type dataset
```
4. pandas + DuckDB
```python
import duckdb

conn = duckdb.connect()

# Latin sentences as DataFrame
df = conn.sql("""
    SELECT sentence, doc_url, position
    FROM 'hf://datasets/open-index/fineweb-2-nlp/data/sentences/lat_Latn/*.parquet'
    LIMIT 1000
""").df()
print(f"Loaded {len(df):,} sentences")
print(df.head(10))

# Word frequency analysis
words_df = conn.sql("""
    SELECT word, sum(frequency) as total_freq
    FROM 'hf://datasets/open-index/fineweb-2-nlp/data/words/lat_Latn/*.parquet'
    GROUP BY word ORDER BY total_freq DESC LIMIT 200
""").df()
print(words_df)
```
Dataset statistics
| Metric | Value |
|---|---|
| Total sentences | 23,235,231 |
| Total paragraphs | 549,363 |
| Total word tokens | 371,647,537 |
| Unique word entries (per-shard) | 20,542,963 |
| Total n-gram entries (per-shard) | 475,285,790 |
| Languages processed | 996 |
| Source documents | 528,708 |
| Source data processed | 789.7 MB |
| Output parquet size | 6.2 GB |
| Avg sentence length | 104.9 chars |
| Avg paragraph length | 4478.8 chars |
| Avg sentences per document | 43.9 |
| Avg paragraphs per document | 1.0 |
| Avg sentences per paragraph | 42.3 |
Per-language breakdown
| # | Language | Sentences | Paragraphs | Words | Avg Sent | Avg Para | Docs | Shards | Source | Output |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | non_Latn (non_Latn) | 527,948 | 2,970 | 8,729,010 | 96.7 | 17369.0 | 2,331 | 1 | 21.9 MB | 71.1 MB |
| 2 | Turkmen (tuk_Cyrl) | 221,123 | 1,128 | 2,324,099 | 144.3 | 28472.7 | 826 | 1 | 8.6 MB | 24.9 MB |
| 3 | alt_Cyrl (alt_Cyrl) | 205,540 | 1,967 | 2,231,510 | 148.0 | 15570.0 | 1,881 | 1 | 7.6 MB | 20.9 MB |
| 4 | qug_Latn (qug_Latn) | 193,962 | 2,237 | 2,262,307 | 93.5 | 8190.1 | 2,231 | 1 | 6.6 MB | 20.0 MB |
| 5 | tcz_Latn (tcz_Latn) | 152,266 | 2,468 | 3,431,253 | 130.4 | 8104.5 | 2,459 | 1 | 7.8 MB | 20.9 MB |
| 6 | gom_Latn (gom_Latn) | 146,526 | 1,942 | 1,880,520 | 86.6 | 6606.7 | 1,668 | 1 | 5.5 MB | 18.1 MB |
| 7 | nbl_Latn (nbl_Latn) | 140,093 | 2,559 | 1,337,358 | 85.0 | 4706.4 | 2,519 | 1 | 4.8 MB | 12.7 MB |
| 8 | lua_Latn (lua_Latn) | 138,946 | 2,341 | 2,303,160 | 106.2 | 6359.2 | 2,332 | 1 | 5.1 MB | 16.0 MB |
| 9 | mni_Latn (mni_Latn) | 132,013 | 1,407 | 1,495,153 | 77.6 | 7378.1 | 1,395 | 1 | 4.2 MB | 12.5 MB |
| 10 | Swati (ssw_Latn) | 127,536 | 2,230 | 1,157,650 | 81.9 | 4741.7 | 2,115 | 1 | 4.0 MB | 16.0 MB |
| 11 | kng_Latn (kng_Latn) | 126,936 | 2,303 | 2,225,243 | 96.3 | 5363.5 | 2,198 | 1 | 4.0 MB | 12.4 MB |
| 12 | mos_Latn (mos_Latn) | 126,123 | 3,396 | 2,163,421 | 89.8 | 3372.7 | 2,240 | 1 | 4.4 MB | 13.1 MB |
| 13 | mnw_Mymr (mnw_Mymr) | 120,477 | 2,553 | 3,465,581 | 231.1 | 10945.0 | 2,340 | 1 | 5.9 MB | 18.9 MB |
| 14 | pck_Latn (pck_Latn) | 119,867 | 1,872 | 2,709,591 | 122.3 | 7892.2 | 1,871 | 1 | 5.8 MB | 17.5 MB |
| 15 | tiv_Latn (tiv_Latn) | 115,586 | 2,161 | 2,160,577 | 85.1 | 4602.0 | 2,139 | 1 | 3.6 MB | 11.6 MB |
| 16 | Romanian (ron_Cyrl) | 104,553 | 1,974 | 1,717,190 | 186.3 | 9917.3 | 1,906 | 1 | 5.4 MB | 19.5 MB |
| 17 | npi_Latn (npi_Latn) | 104,197 | 2,687 | 1,813,816 | 106.1 | 4151.9 | 2,476 | 1 | 4.2 MB | 14.2 MB |
| 18 | mdf_Cyrl (mdf_Cyrl) | 103,765 | 3,095 | 1,029,290 | 139.2 | 4700.5 | 1,783 | 1 | 3.9 MB | 13.2 MB |
| 19 | nzi_Latn (nzi_Latn) | 103,176 | 2,493 | 1,828,257 | 104.6 | 4371.2 | 2,493 | 1 | 3.7 MB | 16.3 MB |
| 20 | pam_Latn (pam_Latn) | 98,608 | 2,162 | 1,140,731 | 69.7 | 3221.8 | 2,005 | 1 | 2.9 MB | 15.0 MB |
| 21 | dak_Latn (dak_Latn) | 95,906 | 1,328 | 1,259,965 | 81.2 | 5938.5 | 412 | 1 | 1.9 MB | 19.6 MB |
| 22 | btx_Latn (btx_Latn) | 95,824 | 2,305 | 1,384,785 | 91.1 | 3828.6 | 2,294 | 1 | 3.3 MB | 10.4 MB |
| 23 | iso_Latn (iso_Latn) | 95,104 | 2,196 | 1,890,498 | 106.6 | 4658.4 | 2,186 | 1 | 3.4 MB | 11.1 MB |
| 24 | ory_Latn (ory_Latn) | 93,598 | 1,442 | 1,631,705 | 99.8 | 6543.5 | 1,319 | 1 | 3.3 MB | 12.0 MB |
| 25 | Marathi (mar_Latn) | 93,468 | 1,806 | 1,199,222 | 76.5 | 4010.5 | 1,757 | 1 | 3.1 MB | 8.9 MB |
| 26 | dag_Latn (dag_Latn) | 87,526 | 1,870 | 759,224 | 53.9 | 2567.0 | 1,035 | 1 | 1.7 MB | 6.3 MB |
| 27 | bci_Latn (bci_Latn) | 86,266 | 1,507 | 1,385,649 | 82.8 | 4794.4 | 1,503 | 1 | 2.3 MB | 8.2 MB |
| 28 | sgs_Latn (sgs_Latn) | 84,996 | 2,753 | 744,509 | 67.9 | 2127.1 | 2,382 | 1 | 2.8 MB | 12.1 MB |
| 29 | chk_Latn (chk_Latn) | 84,764 | 1,707 | 1,383,072 | 95.1 | 4770.8 | 1,685 | 1 | 2.9 MB | 10.3 MB |
| 30 | lzh_Hani (lzh_Hani) | 84,487 | 586 | 2,117,940 | 92.9 | 13440.9 | 558 | 1 | 4.0 MB | 15.2 MB |
| 31 | tvl_Latn (tvl_Latn) | 84,410 | 1,751 | 2,122,926 | 115.8 | 5630.9 | 1,737 | 1 | 3.1 MB | 10.8 MB |
| 32 | tzh_Latn (tzh_Latn) | 83,775 | 1,839 | 1,671,169 | 119.6 | 5492.4 | 1,814 | 1 | 3.4 MB | 10.8 MB |
| 33 | hmo_Latn (hmo_Latn) | 82,002 | 1,765 | 1,294,799 | 90.7 | 4259.8 | 1,750 | 1 | 2.0 MB | 9.9 MB |
| 34 | bem_Latn (bem_Latn) | 81,947 | 1,975 | 1,286,155 | 103.7 | 4342.9 | 1,787 | 1 | 3.2 MB | 9.7 MB |
| 35 | rar_Latn (rar_Latn) | 80,792 | 1,859 | 1,642,706 | 95.6 | 4197.6 | 1,818 | 1 | 2.6 MB | 8.9 MB |
| 36 | toi_Latn (toi_Latn) | 80,526 | 1,652 | 936,388 | 90.5 | 4460.7 | 1,649 | 1 | 2.7 MB | 8.0 MB |
| 37 | Old English (ang_Latn) | 76,156 | 2,213 | 910,971 | 74.5 | 2598.3 | 2,001 | 1 | 2.6 MB | 15.9 MB |
| 38 | arn_Latn (arn_Latn) | 76,026 | 2,039 | 1,123,438 | 97.2 | 3662.1 | 1,927 | 1 | 3.0 MB | 9.7 MB |
| 39 | quz_Latn (quz_Latn) | 75,347 | 1,855 | 761,653 | 84.5 | 3470.5 | 1,528 | 1 | 2.5 MB | 8.2 MB |
| 40 | tuc_Latn (tuc_Latn) | 74,425 | 1,362 | 879,671 | 65.9 | 3652.3 | 1,362 | 1 | 1.6 MB | 11.4 MB |
| 41 | zai_Latn (zai_Latn) | 74,271 | 1,720 | 1,046,360 | 86.7 | 3784.8 | 1,698 | 1 | 2.3 MB | 11.4 MB |
| 42 | srm_Latn (srm_Latn) | 74,187 | 1,937 | 1,321,052 | 80.1 | 3103.9 | 1,936 | 1 | 1.9 MB | 7.3 MB |
| 43 | mps_Latn (mps_Latn) | 74,142 | 965 | 1,268,651 | 92.7 | 7196.5 | 965 | 1 | 1.9 MB | 6.4 MB |
| 44 | gcf_Latn (gcf_Latn) | 73,747 | 2,688 | 1,059,342 | 80.3 | 2230.2 | 2,433 | 1 | 2.7 MB | 13.3 MB |
| 45 | orv_Cyrl (orv_Cyrl) | 73,447 | 1,458 | 1,632,538 | 226.1 | 11437.9 | 1,372 | 1 | 5.1 MB | 22.8 MB |
| 46 | sms_Latn (sms_Latn) | 72,503 | 2,520 | 740,057 | 106.0 | 3076.9 | 2,478 | 1 | 2.8 MB | 10.1 MB |
| 47 | Manx (glv_Latn) | 72,120 | 2,815 | 1,232,597 | 99.1 | 2563.2 | 2,462 | 1 | 3.1 MB | 16.4 MB |
| 48 | bru_Latn (bru_Latn) | 70,108 | 1,023 | 1,316,408 | 112.9 | 7808.0 | 1,021 | 1 | 2.5 MB | 10.6 MB |
| 49 | nah_Latn (nah_Latn) | 69,851 | 2,823 | 693,924 | 75.1 | 1880.9 | 2,522 | 1 | 1.9 MB | 13.2 MB |
| 50 | ach_Latn (ach_Latn) | 69,776 | 2,546 | 1,213,339 | 87.9 | 2436.0 | 2,522 | 1 | 2.4 MB | 7.9 MB |
| 51 | syc_Syrc (syc_Syrc) | 69,636 | 1,393 | 5,158,313 | 465.3 | 23311.1 | 1,307 | 1 | 9.1 MB | 31.6 MB |
| 52 | kmb_Latn (kmb_Latn) | 69,286 | 1,324 | 1,033,815 | 80.7 | 4276.5 | 1,306 | 1 | 2.0 MB | 9.9 MB |
| 53 | awa_Deva (awa_Deva) | 68,871 | 1,905 | 1,565,158 | 162.8 | 5917.8 | 1,902 | 1 | 2.9 MB | 9.9 MB |
| 54 | umb_Latn (umb_Latn) | 68,289 | 1,319 | 924,878 | 81.1 | 4251.9 | 1,315 | 1 | 2.0 MB | 6.2 MB |
| 55 | Kalmyk (xal_Cyrl) | 67,499 | 1,478 | 672,016 | 115.1 | 5301.9 | 1,424 | 1 | 2.3 MB | 10.2 MB |
| 56 | byr_Latn (byr_Latn) | 67,117 | 574 | 556,136 | 81.1 | 9601.0 | 574 | 1 | 1.4 MB | 15.1 MB |
| 57 | bjn_Latn (bjn_Latn) | 66,827 | 2,568 | 872,195 | 89.2 | 2346.2 | 2,323 | 1 | 2.5 MB | 13.5 MB |
| 58 | ubu_Latn (ubu_Latn) | 66,772 | 960 | 1,195,913 | 121.6 | 8526.1 | 960 | 1 | 2.3 MB | 13.1 MB |
| 59 | hmr_Latn (hmr_Latn) | 66,120 | 1,619 | 1,413,234 | 110.8 | 4564.9 | 1,352 | 1 | 2.9 MB | 9.2 MB |
| 60 | kos_Latn (kos_Latn) | 65,989 | 1,631 | 1,098,529 | 85.5 | 3498.6 | 1,622 | 1 | 2.1 MB | 10.9 MB |
How it works
```
Source (HuggingFaceFW/fineweb-2) Pipeline                      Output (open-index/fineweb-2-nlp)
┌────────────────────────────┐   ┌─────────────────────────┐   ┌─────────────────────────────┐
│ data/{lang}/train/         │   │ 1. Download shard       │   │ data/sentences/{lang}/      │
│   000_00000.parquet        │──▶│ 2. Read 10K batches     │──▶│   0000.parquet              │
│   000_00001.parquet        │   │ 3. Split:               │   │ data/paragraphs/{lang}/     │
│   ...                      │   │    · paragraphs         │   │   0000.parquet              │
│                            │   │    · sentences          │   │ data/words/{lang}/          │
│ ~5 billion docs            │   │    · words + freq       │   │   0000.parquet              │
│ 1,868 lang-script pairs    │   │    · ngrams + freq      │   │ data/ngrams/{lang}/         │
│ 20 TB total                │   │ 4. Write parquet        │   │   0000.parquet              │
│                            │   │ 5. Publish to HF        │   │ stats.csv                   │
│                            │   │ 6. Delete local         │   │ README.md (auto-generated)  │
└────────────────────────────┘   └─────────────────────────┘   └─────────────────────────────┘
```
Pipeline details
The pipeline processes FineWeb-2 one shard at a time to keep resource usage predictable and bounded. This is the core design principle: at no point does the pipeline need to hold more than one shard's worth of data in memory or on disk.
Download. A single source parquet shard is fetched from HuggingFace. Downloads are idempotent: if the file already exists locally with the correct size, it is skipped.
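The idempotency check amounts to a size comparison before fetching. A minimal sketch of the idea (illustrative Python, not the pipeline's actual code):

```python
import os

def needs_download(local_path: str, expected_size: int) -> bool:
    """True if the shard is missing locally or its size does not match the source."""
    return not (os.path.exists(local_path)
                and os.path.getsize(local_path) == expected_size)
```

A shard that already exists with the correct size is skipped; a partial file from an interrupted run fails the size check and is fetched again.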
Read. The shard is streamed in batches of 10,000 rows using parquet-go. This keeps memory usage constant at roughly 20 MB regardless of shard size. Each batch of documents is distributed across parallel workers for splitting.
Split. Each worker processes its share of documents, extracting paragraphs, sentences, words, and n-grams. Workers maintain their own local frequency maps, which are merged after the batch completes. This avoids lock contention and keeps throughput high.
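The worker-local map pattern can be sketched as follows; this is a simplified illustration of the merge step, not the pipeline's actual data structures:

```python
from collections import Counter

def count_words(docs):
    """One worker's share: a private frequency map, so no locks are needed."""
    local = Counter()
    for text in docs:
        local.update(text.lower().split())
    return local

def merge_maps(local_maps):
    """Merge the per-worker maps once the batch completes."""
    merged = Counter()
    for local in local_maps:
        merged.update(local)
    return merged
```

Because each worker only ever touches its own Counter, the expensive counting runs without synchronization; the cheap merge happens once per batch.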
Write. Results are written as Zstandard-compressed Parquet files with 50,000 rows per row group. Zstandard provides excellent compression ratios on text data while remaining fast to decompress.
Publish. The output parquet files, along with an updated stats.csv and a regenerated README.md, are committed to HuggingFace in a single atomic operation. If the commit fails due to rate limiting or a transient server error, it is retried with exponential backoff.
Clean up. After a successful publish, both the source and output files are deleted from local disk. This prevents disk usage from growing over time and allows the pipeline to process thousands of languages on a machine with limited storage.
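The retry policy for the publish step might look like this hypothetical sketch:

```python
import random
import time

def publish_with_backoff(commit, max_attempts=5, base_delay=1.0):
    """Run `commit`; on failure wait base_delay * 2^attempt (plus jitter) and retry."""
    for attempt in range(max_attempts):
        try:
            return commit()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            # exponential backoff with jitter so retries do not synchronize
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

Jitter matters when many language pipelines hit the same rate limit: without it, all of them would retry at the same instant and trip the limit again.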
Resource budgets
| Resource | Budget | How |
|---|---|---|
| Memory | ~200 MB | 10K-row read batches, frequency maps pruned at 1M entries |
| Disk | ~10 GB peak | One shard at a time, deleted after successful publish |
| Network | Sequential | One download at a time, retry on rate limit |
The pipeline is fully resumable. A stats.csv file tracks every completed shard, so
re-running the pipeline after an interruption will automatically skip all previously
finished work and continue from where it left off.
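Resumption can be sketched like this; the column names `lang_script` and `shard` are assumptions about stats.csv's layout for illustration, not its documented schema:

```python
import csv

def completed_shards(stats_path):
    """Read stats.csv and return the set of (lang_script, shard) pairs already done."""
    done = set()
    try:
        with open(stats_path, newline="") as f:
            for row in csv.DictReader(f):
                done.add((row["lang_script"], row["shard"]))
    except FileNotFoundError:
        pass  # first run: nothing completed yet
    return done

def pending(all_shards, stats_path):
    """Shards still to process: everything minus what stats.csv records."""
    done = completed_shards(stats_path)
    return [s for s in all_shards if s not in done]
```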
Splitting methodology
Sentence splitting
Sentence segmentation is one of the harder problems in multilingual NLP. There is no universal rule for where sentences begin and end: different languages use different punctuation conventions, and web text frequently breaks the conventions of any language.
Our approach uses a set of punctuation and casing heuristics tuned for web text across many scripts. The rules are designed to be conservative, preferring to keep text together rather than over-splitting. For short texts (under 500 characters), we use sentencex, a Wikimedia project that provides language-specific sentence boundary detection with knowledge of each language's abbreviation patterns and punctuation norms.
| Rule | Example | Behavior |
|---|---|---|
| Period + space + uppercase | world. The | Split |
| Abbreviation + period | Mr. Smith | No split |
| Decimal number | 3.14 is | No split |
| Single-letter initial | J. K. Rowling | No split |
| CJK fullstop | 世界。今天 | Always split |
| Devanagari danda | text। next | Always split |
| Exclamation/question | really! What | Split |
| Newline after 10+ chars | long text\nNext | Split |
For CJK languages (Chinese, Japanese, Korean), individual Han characters, Hiragana, Katakana, and Hangul syllables are each treated as separate word tokens, reflecting the character-level structure of these writing systems. This means that a Chinese sentence like "今天天气很好" produces six word tokens rather than being treated as a single unsplittable string.
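A minimal sketch of character-level CJK tokenization using Unicode character names (illustrative only, not the pipeline's tokenizer):

```python
import unicodedata

def cjk_word_tokens(text):
    """Treat each Han, Hiragana, Katakana, or Hangul character as its own token."""
    tokens = []
    for ch in text:
        name = unicodedata.name(ch, "")
        if name.startswith(("CJK UNIFIED", "HIRAGANA", "KATAKANA", "HANGUL")):
            tokens.append(ch)
    return tokens
```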
Word splitting
Word extraction follows a straightforward pipeline designed to produce clean, normalized tokens suitable for frequency analysis:
- NFC normalization (Unicode canonical composition) to ensure that equivalent character sequences are represented identically
- Lowercase conversion for case-insensitive frequency counting
- Splitting on non-letter, non-digit boundaries, while preserving apostrophes and hyphens that appear mid-word (e.g. "don't", "well-known")
- Stripping of leading and trailing punctuation
- Filtering of empty strings and pure-punctuation tokens
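The steps above can be sketched as a small normalizer; this is a simplified approximation of the rules, not the exact implementation:

```python
import re
import unicodedata

# Letters and digits form tokens; apostrophes and hyphens survive only
# mid-word, so leading/trailing punctuation is stripped automatically.
_TOKEN = re.compile(r"[^\W_]+(?:['\u2019-][^\W_]+)*")

def words(text):
    """NFC-normalize, lowercase, and extract word tokens."""
    text = unicodedata.normalize("NFC", text).lower()
    return _TOKEN.findall(text)
```

Keeping the apostrophe and hyphen inside the token pattern is what preserves "don't" and "well-known" as single entries while still dropping surrounding punctuation.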
Paragraph splitting
FineWeb-2's source text comes from HTML pages processed by trafilatura, a web content
extraction library. In trafilatura's output, HTML <p> tags are represented as double
newlines (\n\n). We use this convention to split text into paragraphs:
- Split on sequences of two or more consecutive newlines
- Trim leading and trailing whitespace from each paragraph
- Discard fragments shorter than 20 characters, which typically correspond to navigation elements, single-word headers, or other structural debris from the original HTML
This simple approach works well in practice because trafilatura has already done the hard work of extracting meaningful content blocks from the HTML.
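In code, the whole paragraph splitter is a few lines (a sketch of the rules above, under the same 20-character threshold):

```python
import re

MIN_PARAGRAPH_CHARS = 20  # fragments below this are usually nav/header debris

def paragraphs(text):
    """Split on runs of 2+ newlines (trafilatura's <p> convention), trim, filter."""
    parts = re.split(r"\n{2,}", text)
    return [p.strip() for p in parts if len(p.strip()) >= MIN_PARAGRAPH_CHARS]
```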
N-gram extraction
N-grams are extracted by sliding a window of size n over the word token sequence for each document. We compute bigrams (n=2), trigrams (n=3), 4-grams, and 5-grams.
| N | Name | Example from "the quick brown fox" |
|---|---|---|
| 2 | Bigram | "the quick", "quick brown", "brown fox" |
| 3 | Trigram | "the quick brown", "quick brown fox" |
| 4 | 4-gram | "the quick brown fox" |
| 5 | 5-gram | (needs 5+ words) |
To keep memory usage bounded, per-shard frequency maps are pruned when they exceed 1 million unique entries. During pruning, entries with a frequency of 1 are evicted first. This means that very rare n-grams in large shards may be undercounted, but the most frequent and analytically useful n-grams are preserved accurately.
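The sliding window and the pruning rule together fit in a short sketch (illustrative Python; the thresholds match the description above, the rest is an approximation):

```python
from collections import Counter

def ngrams(tokens, n):
    """Slide a window of size n over the token sequence."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def count_ngrams(docs, max_entries=1_000_000):
    """Per-shard counts; when the map exceeds max_entries, evict frequency-1 entries."""
    freq = Counter()
    for tokens in docs:
        for n in range(2, 6):  # bigrams through 5-grams
            freq.update(ngrams(tokens, n))
        if len(freq) > max_entries:
            freq = Counter({k: v for k, v in freq.items() if v > 1})
    return freq
```

Pruning only frequency-1 entries biases losses toward hapax n-grams, which is exactly the trade-off described above: rare entries may be undercounted, frequent ones stay exact.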
Dataset card
Dataset summary
FineWeb-2 NLP provides pre-segmented versions of HuggingFace's FineWeb-2 dataset. Each processed source document is split into sentences, paragraphs, words, and n-grams using language-aware processing, with coverage of FineWeb-2's roughly 5 billion documents growing as the pipeline continues to run. Sentence and paragraph rows carry source document IDs, and all four configs share language codes, so researchers can cross-reference between them: look up the sentences that appear in a document, check the word frequencies for that language, or see which n-grams are frequent in the same language as a particular sentence.
The primary goal is to lower the barrier to multilingual NLP research. Instead of downloading and processing 20 TB of raw text, researchers can query exactly the slice they need, whether that is all sentences in Latin, word frequencies in Vietnamese, or bigram distributions across every language in the corpus.
Data instances
Sentence:

```json
{
  "sentence": "Gallia est omnis divisa in partes tres.",
  "doc_id": "f7ef49fc-6899-4d56-aaa7-bea5924802f3",
  "doc_url": "https://example.com/caesar",
  "position": 0,
  "language": "lat",
  "language_script": "Latn"
}
```

Word:

```json
{
  "word": "est",
  "frequency": 847,
  "doc_frequency": 412,
  "language": "lat",
  "language_script": "Latn"
}
```

N-gram:

```json
{
  "ngram": "in partes",
  "n": 2,
  "frequency": 23,
  "language": "lat",
  "language_script": "Latn"
}
```
Curation rationale
Sentence-level and word-level datasets are foundational for many areas of NLP research. They are used to train sentence embeddings, build and evaluate language models, study word frequency distributions and Zipf's law across languages, analyze collocations and phrasal patterns, and benchmark multilingual NLP tools. Having these units pre-extracted and ready to query saves researchers significant time and computational resources, and makes it practical to work with languages that might otherwise be overlooked due to the effort required to process the raw data.
Source data
All text originates from FineWeb-2 (DOI: 10.57967/hf/3744). FineWeb-2 was constructed by extracting text from approximately 100 Common Crawl snapshots covering 2013 through 2024. The extraction pipeline includes text extraction via trafilatura, language identification using GlotLID, MinHash deduplication to remove near-duplicate documents, and adaptive quality filtering to remove low-quality content. We do not apply any additional filtering or deduplication beyond what FineWeb-2 provides.
Considerations for using the data
There are several important limitations to keep in mind when working with this dataset:
Low-resource language coverage. Many of the smaller languages in FineWeb-2 consist primarily of Bible translations, Wikipedia mirrors, and religious texts. The FineWeb-2 authors note that over 70% of language-script pairs have more than 50% of their content from such sources. Word frequencies and n-gram distributions for these languages will reflect this narrow domain rather than general language use.
Sentence splitting accuracy. The quality of sentence segmentation varies by language and script. Latin-script and CJK languages tend to produce the most accurate results, because their punctuation conventions are well-understood and widely standardized. Languages with less common scripts, or languages that use minimal punctuation, may have lower splitting accuracy.
Vietnamese word boundaries. Vietnamese is written with spaces between syllables rather than between words. As a result, compound words like "hα»c sinh" (student) are split into their component syllables "hα»c" and "sinh" rather than being kept as a single token. This is a known limitation of whitespace-based word splitting for Vietnamese.
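The limitation is easy to demonstrate: plain whitespace splitting, sketched below, cannot see Vietnamese compound boundaries.

```python
def whitespace_tokens(text):
    """Whitespace splitting treats each Vietnamese syllable as a separate token."""
    return text.lower().split()
```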
Per-shard word frequencies. Word and n-gram frequencies are computed per source shard,
not aggregated globally. To get language-level frequencies, aggregate with
sum(frequency) GROUP BY word in DuckDB or any query engine that can read Parquet.
No additional PII filtering. This dataset does not apply any personally identifiable information filtering beyond what was already done upstream by the FineWeb-2 team. Web text inherently contains names, email addresses, and other personal information.
License
ODC-By 1.0 (Open Data Commons Attribution License), following FineWeb-2's license.
Author
Created by Duc-Tam Nguyen (tamnd) as part of the open-index project.
Citation
```bibtex
@misc{fineweb2nlp2026,
  title  = {FineWeb-2 NLP: Sentences, Paragraphs, Words, and N-grams},
  author = {Nguyen, Duc-Tam},
  year   = {2026},
  url    = {https://huggingface.co/datasets/open-index/fineweb-2-nlp},
  note   = {Derived from FineWeb-2 (HuggingFaceFW/fineweb-2)}
}

@article{penedo2025fineweb2,
  title         = {FineWeb2: One Pipeline to Scale Them All},
  author        = {Guilherme Penedo and others},
  year          = {2025},
  eprint        = {2506.20920},
  archivePrefix = {arXiv}
}
```
Last updated: 2026-04-15 03:59 UTC