FineFreq is a large-scale multilingual character frequency dataset derived from 96.6 trillion characters across 1900+ languages, built from the FineWeb and FineWeb2 corpora. It provides per-language frequency tables with year-level temporal resolution, covering the period 2013–2025.
## Dataset Structure
Each file (one per language-script pair) contains the following fields:

| Column | Description |
|---|---|
| `iso_639_3` | ISO 639-3 language code |
| `script` | Unicode script (e.g. `Latn`, `Cyrl`, `Hani`) |
| `language_code` | Combined language-script tag (e.g. `eng_Latn`) |
| `language_name` | Human-readable language name |
| `character` | Unicode character |
| `unicode_category` | Unicode general category (e.g. `Ll`, `Lo`, `Pd`) |
| `unicode_name` | Official Unicode character name |
| `total_frequency_all_time` | Total count of the character across all available years |
| `time_periods_with_data_count` | Number of years with observed data |
| `time_periods_list` | List of years in which the character appeared |
| `year_20XX_frequency` | Character frequency for that year (e.g. `year_2017_frequency`) |

Not all languages have full year coverage. Frequencies are raw counts (not percentages); normalize them as needed.
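Because frequencies are stored as raw counts, a common first step is converting them to per-language relative frequencies. A minimal sketch using the documented `character` and `total_frequency_all_time` columns, shown here on a toy frame (real data comes from the per-language parquet files):

```python
import pandas as pd

# Toy frame with the documented schema (real data: DATA/<lang>/<lang>.parquet)
df = pd.DataFrame({
    "character": ["e", "t", "a"],
    "total_frequency_all_time": [600, 300, 100],
})

# Relative frequency: each character's share of the language's total count
df["relative_frequency"] = (
    df["total_frequency_all_time"] / df["total_frequency_all_time"].sum()
)
print(df)
```

The same division applies unchanged to a real per-language frame, since the column names match the schema above.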

Two manifest files are included for quick indexing:

- `manifest/languages.csv`
- `manifest/languages.parquet`
Each row contains metadata for one language-script pair:

| Field | Description |
|---|---|
| `iso_639_3` | ISO 639-3 code |
| `language_code` | Language-script ID |
| `script` | Script used |
| `language_name` | Full language name |
| `total_frequency` | Sum of all character frequencies |
| `data_source` | FineWeb or FineWeb2 |
| `years_count` | Number of years with data |
| `min_year`, `max_year` | First and last year of data coverage |
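The coverage fields make it easy to filter languages by how many years of data they have. A sketch on a toy manifest frame with the documented `min_year`/`max_year` fields (the toy rows are illustrative, not actual dataset values):

```python
import pandas as pd

# Toy manifest rows with the documented fields (real file: manifest/languages.parquet)
manifest = pd.DataFrame({
    "language_code": ["eng_Latn", "quy_Latn", "rus_Cyrl"],
    "data_source": ["FineWeb", "FineWeb2", "FineWeb2"],
    "min_year": [2013, 2018, 2014],
    "max_year": [2025, 2020, 2025],
})

# Keep only language-script pairs with at least five years of coverage
manifest["years_span"] = manifest["max_year"] - manifest["min_year"] + 1
wide_coverage = manifest[manifest["years_span"] >= 5]
print(wide_coverage["language_code"].tolist())
```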
## Usage

### Load a single language dataset (e.g. English)
Use the standard Hugging Face `load_dataset` function to load data for any language.

```python
from datasets import load_dataset

# Load character frequency data for English (eng_Latn)
ds = load_dataset(
    "lgi2p/finefreq",
    data_files="DATA/eng_Latn/eng_Latn.parquet",  # Use .csv for CSV format
    split="train",
)

# Convert to pandas DataFrame for analysis
df = ds.to_pandas()

# Display the top 20 characters by total frequency
df_sorted = df.sort_values("total_frequency_all_time", ascending=False)
print(df_sorted[["character", "unicode_name", "total_frequency_all_time"]].head(20))
```
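The wide `year_20XX_frequency` columns can be reshaped into long form for temporal analysis. A sketch on a toy frame with two of the documented year columns (real frames carry whichever year columns that language has data for):

```python
import pandas as pd

# Toy frame with two of the documented year columns
df = pd.DataFrame({
    "character": ["e", "t"],
    "year_2019_frequency": [100, 50],
    "year_2020_frequency": [200, 150],
})

# Melt the wide year_20XX_frequency columns into (character, year, frequency) rows
year_cols = [c for c in df.columns if c.startswith("year_")]
long = df.melt(id_vars="character", value_vars=year_cols,
               var_name="year", value_name="frequency")
long["year"] = long["year"].str.extract(r"(\d{4})", expand=False).astype(int)
print(long.sort_values(["character", "year"]))
```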
### Browse available languages
Before loading data, you can check which languages are available in the dataset.
```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Download and read the manifest file
manifest_path = hf_hub_download(
    repo_id="lgi2p/finefreq",
    filename="manifest/languages.csv",
    repo_type="dataset",
)
manifest_df = pd.read_csv(manifest_path)

# Show a summary of available languages
print(f"Total languages available: {len(manifest_df)}")
print("\nLanguages with the most character occurrences:")
print(manifest_df.nlargest(10, "total_frequency")[["language_code", "script", "total_frequency"]])
```
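`load_dataset` also accepts a list of `data_files`, so several languages can be combined into one split. The sketch below only builds the file paths, following the documented `DATA/<code>/<code>.parquet` layout; the resulting list can be passed to `load_dataset` as in the single-language example:

```python
# Build per-language file paths following the documented DATA/<code>/<code>.parquet layout
codes = ["eng_Latn", "fra_Latn"]
data_files = [f"DATA/{c}/{c}.parquet" for c in codes]
print(data_files)

# Then: load_dataset("lgi2p/finefreq", data_files=data_files, split="train")
```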
### Optional: helper function to load any language by code
```python
from datasets import load_dataset


def load_finefreq(language_code: str, format: str = "parquet"):
    """
    Load a FineFreq language dataset.

    Args:
        language_code: Language identifier (e.g., 'eng_Latn', 'fra_Latn')
        format: File format ('parquet' or 'csv')

    Returns:
        pandas.DataFrame with character frequency data
    """
    ext = "csv" if format == "csv" else "parquet"
    ds = load_dataset(
        "lgi2p/finefreq",
        data_files=f"DATA/{language_code}/{language_code}.{ext}",
        split="train",
    )
    return ds.to_pandas()


# Example usage
df_english = load_finefreq("eng_Latn")
df_french = load_finefreq("fra_Latn")
```
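With two per-language frames loaded (e.g. `df_english` and `df_french` above), their character inventories can be compared directly. A sketch of a Jaccard set-overlap comparison, using toy inventories standing in for the frames' `character` columns:

```python
# Toy character inventories standing in for set(df_english["character"]) etc.
chars_a = {"a", "b", "c", "é"}
chars_b = {"a", "c", "é", "ç"}

# Jaccard similarity: shared characters over the union of both inventories
shared = chars_a & chars_b
jaccard = len(shared) / len(chars_a | chars_b)
print(sorted(shared), round(jaccard, 2))
```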
## Notes

- Replace `"eng_Latn"` with any language code listed in `manifest/languages.csv`.
- Both `.parquet` and `.csv` files are available for each language. Parquet is recommended for faster loading; CSV is human-readable and previewable on the Hugging Face Hub.
- Each language folder also contains a `metadata.json` file with summary statistics and source information.
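The card does not document the fields inside `metadata.json`, so a safe pattern is to parse it and inspect the keys before relying on any particular field. The sketch below parses a stand-in JSON string with hypothetical fields; in practice, fetch the real file with `hf_hub_download` as shown for the manifest:

```python
import json

# Stand-in for a downloaded metadata.json; the field names here are hypothetical
raw = '{"language_code": "eng_Latn", "total_frequency": 12345}'
meta = json.loads(raw)

# Inspect the available keys before relying on any particular field
print(sorted(meta.keys()))
```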
## Citation

```bibtex
@article{Xu2025finefreq,
  title={FineFreq: A Multilingual Character Frequency Dataset from Web-Scale Text},
  author={Binbin Xu},
  journal={arXiv preprint arXiv:2512.09701},
  year={2025},
  url={https://arxiv.org/abs/2512.09701}
}
```