Datasets:
JPN-Bench Source Separation
JPN-Bench source material must be kept separate from tokenizer training material. The crawler in this repo assigns whole source files to one lane:
- tokenizer training source files go to data/source_materials/tokenizer_training_sources.jsonl
- benchmark candidate source files go to benchmarks/jpn_bench/source_materials/benchmark_sources.jsonl
When content extraction is enabled, sentence/tree text follows the same split:
- tokenizer training text goes to data/source_materials/tokenizer_training_text.jsonl
- benchmark candidate text goes to benchmarks/jpn_bench/source_materials/benchmark_text.jsonl
The split is deterministic and based on source + source_file_id, not on
individual sentences. This avoids placing neighboring sentences from the same
article or source file into both training and benchmark lanes.
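The file-level split described above can be sketched as a stable hash over source + source_file_id. This is a minimal illustration, not the repo's actual implementation: the function name, the benchmark fraction, and the bucketing scheme are assumptions.

```python
import hashlib

def assign_lane(source: str, source_file_id: str,
                benchmark_fraction: float = 0.2) -> str:
    """Deterministically assign a whole source file to one lane.

    The key covers only source + source_file_id, never sentence text,
    so every sentence extracted from one file lands in the same lane.
    (assign_lane and benchmark_fraction are hypothetical names chosen
    for illustration.)
    """
    key = f"{source}:{source_file_id}".encode("utf-8")
    digest = hashlib.sha256(key).hexdigest()
    # Map the first 8 hex digits onto [0, 1] and compare to the split point.
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return "benchmark" if bucket < benchmark_fraction else "tokenizer_training"
```

Because the hash ignores sentence content, rerunning the crawler (or adding new sentences to a file) can never move part of a file across lanes.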
Configured Sources
The configured source list lives in configs/jpn_bench_source_materials.yaml.
Kainoki per-file downloads are cached under
data/raw/source_archives/kainoki/ so repeat crawls do not hit the live CGI
endpoint again.
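The caching behavior amounts to a read-through file cache keyed by filename. A minimal sketch, assuming the cache directory above; `fetch_cached` is a hypothetical helper name, not the repo's actual API.

```python
import urllib.request
from pathlib import Path

# Cache location used for Kainoki per-file downloads (from the docs above).
CACHE_DIR = Path("data/raw/source_archives/kainoki")

def fetch_cached(url: str, filename: str) -> bytes:
    """Return a Kainoki file's bytes, downloading only on a cache miss.

    On a hit, the bytes come from disk and the live CGI endpoint is
    never contacted. (Hypothetical helper for illustration.)
    """
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    path = CACHE_DIR / filename
    if path.exists():
        return path.read_bytes()          # cache hit: no HTTP request
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    path.write_bytes(data)                # populate cache for later crawls
    return data
```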
NPCMJ
The NPCMJ source uses the kana bracketed tree archive:
https://www2.ninjal.ac.jp/npcmj/zip/npcmj_kana.zip
NPCMJ is a CC BY 4.0 research corpus. Keep the source file ID, archive URL, and citation metadata with any derived items.
Kainoki
The Kainoki source uses the live download index:
https://oncoj.orinst.ox.ac.uk/cgi-bin/overview.sh?db=Kainoki&mode=download
Kainoki is CC BY 4.0 and is updated over time. Treat the index as live source material; record crawl dates in artifacts if publishing a benchmark release.
Commands
Build separated source-file manifests:
uv run jpn-tokenizer crawl-source-materials \
--config configs/jpn_bench_source_materials.yaml
Download the NPCMJ archive and extract/fetch a small content sample:
uv run jpn-tokenizer crawl-source-materials \
--config configs/jpn_bench_source_materials.yaml \
--fetch-archives \
--fetch-content \
--max-content-files 20
For a full content crawl, omit --max-content-files. Do this deliberately: a
full Kainoki crawl requires hundreds of HTTP requests, so keep the request
delay in the config and avoid rerunning full crawls unnecessarily.
Rules
- Never train a tokenizer on benchmarks/jpn_bench/source_materials/.
- Never promote benchmark items from source files assigned to the tokenizer training lane.
- When training with these sources, pass released benchmark JSONL files via --exclude-benchmark.
- Hidden benchmark sources should live outside the public repo and should use the same file-level split rule.
- For final benchmark items, prefer rewritten or hand-authored prompts inspired by the source material over raw copied sentences unless the license and benchmark design explicitly allow direct reuse.
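The exclusion rule above is a file-level filter: any training record whose (source, source_file_id) pair appears in a released benchmark JSONL file is dropped. A minimal sketch of that logic; the field names "source" and "source_file_id" follow the manifests described earlier, but the function names are illustrative, not the CLI's internals.

```python
import json

def load_excluded_ids(benchmark_jsonl_lines):
    """Collect (source, source_file_id) pairs from released benchmark items."""
    excluded = set()
    for line in benchmark_jsonl_lines:
        item = json.loads(line)
        excluded.add((item["source"], item["source_file_id"]))
    return excluded

def filter_training_text(training_jsonl_lines, excluded):
    """Yield only training records whose source file produced no benchmark item.

    Filtering is per source file, not per sentence, matching the
    file-level split rule.
    """
    for line in training_jsonl_lines:
        item = json.loads(line)
        if (item["source"], item["source_file_id"]) not in excluded:
            yield line
```

Filtering whole files rather than individual sentences keeps neighboring sentences from a benchmark file out of the training set even when those exact sentences never appeared in the released benchmark.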