---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: url
dtype: string
- name: caption
dtype: string
- name: similarity
dtype: float64
- name: page_title
dtype: string
- name: page_url
dtype: string
- name: punsafe
dtype: float64
- name: width
dtype: float64
- name: height
dtype: float64
- name: original_width
dtype: float64
- name: original_height
dtype: float64
- name: sha256
dtype: string
- name: phash
dtype: string
splits:
- name: train
num_bytes: 72405439283
num_examples: 153942892
download_size: 46743814850
dataset_size: 72405439283
license: apache-2.0
language:
- ja
size_categories:
- 100M<n<1B
---
<div align="center" style="line-height: 1;">
<h1>WAON: Large-Scale and High-Quality Japanese Image-Text Pair Dataset for Vision-Language Models </h1>
<a href="https://huggingface.co/collections/llm-jp/waon" target="_blank">🤗 HuggingFace</a>
&nbsp;|
<a href="https://arxiv.org/abs/2510.22276" target="_blank">📄 Paper</a>
&nbsp;|
<a href="https://github.com/llm-jp/WAON" target="_blank">🧑‍💻 Code</a>
<br/>
<img src="validation_top1_accuracy.svg" width="50%"/>
</div>
## Introduction
WAON is a large-scale Japanese (image, text) pair dataset containing approximately 154M examples (153,942,892 pairs), crawled from Common Crawl.
It is built from six snapshots: CC-MAIN-2025-18, 2025-08, 2024-51, 2024-42, 2024-33, and 2024-26.
To keep the dataset high-quality and diverse, we apply a multi-stage processing pipeline: filtering based on image size and SigLIP image-text similarity scores, followed by deduplication using URLs, captions, and perceptual hashes (pHash).
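The deduplication logic can be sketched as follows. This is an illustrative, stdlib-only example (not the actual pipeline code), assuming pHash values are hex strings of 64-bit hashes and treating two images as near-duplicates when the Hamming distance between their pHashes falls below a small threshold:

```python
def phash_hamming(h1: str, h2: str) -> int:
    """Hamming distance between two pHash values given as hex strings."""
    return bin(int(h1, 16) ^ int(h2, 16)).count("1")

def deduplicate(records, phash_threshold=4):
    """Drop exact URL/caption duplicates and pHash near-duplicates.

    `records` is an iterable of dicts with `url`, `caption`, and `phash` keys.
    The quadratic pHash scan here is illustrative only; a real pipeline
    would bucket hashes (e.g., by prefix) to scale to 150M+ images.
    """
    seen_urls, seen_captions, kept = set(), set(), []
    for rec in records:
        # Exact deduplication on URL and caption.
        if rec["url"] in seen_urls or rec["caption"] in seen_captions:
            continue
        # Near-duplicate detection via pHash Hamming distance.
        if any(phash_hamming(rec["phash"], k["phash"]) <= phash_threshold
               for k in kept):
            continue
        seen_urls.add(rec["url"])
        seen_captions.add(rec["caption"])
        kept.append(rec)
    return kept
```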
## How to Use
Clone the repository:
```bash
git clone https://gitlab.llm-jp.nii.ac.jp/datasets/waon.git
cd waon
```
Load the dataset using the `datasets` library:
```python
from datasets import load_dataset

# Reads all parquet shards under data/ (data/train-*.parquet)
ds = load_dataset("parquet", data_dir="data")
```
### Format
- `url`: URL of the image
- `caption`: Caption associated with the image
- `similarity`: SigLIP similarity score between the image and its caption
- `page_title`: Title of the page containing the image
- `page_url`: URL of the page
- `punsafe`: Probability that the image is unsafe
- `width`: Width (in pixels) of the resized image used for computing pHash
- `height`: Height (in pixels) of the resized image used for computing pHash
- `original_width`: Original width of the image
- `original_height`: Original height of the image
- `sha256`: SHA-256 hash of the original image file
- `phash`: Perceptual hash (pHash) computed from the resized image
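As a usage illustration, records can be filtered on these fields after loading. The function and all thresholds below are hypothetical examples, not values used to build WAON:

```python
def is_clean(rec, max_punsafe=0.1, min_side=200, min_similarity=0.3):
    """Illustrative record filter over the documented fields.

    All thresholds are hypothetical; tune them for your application.
    """
    return (
        rec["punsafe"] <= max_punsafe            # drop likely-unsafe images
        and rec["original_width"] >= min_side     # drop tiny images
        and rec["original_height"] >= min_side
        and rec["similarity"] >= min_similarity   # drop weak image-text pairs
    )
```

With the `datasets` library, such a predicate maps directly onto `ds.filter(is_clean)`.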
## Dataset Construction Pipeline
We construct the WAON dataset through the following steps. The numbers in parentheses in the figure indicate the data count remaining after each processing step, based on the 2025-18 snapshot:
<div align="center">
<img src="waon-pipeline.svg" width="50%"/>
</div>
## LICENSE
This dataset (excluding the images themselves) is licensed under the Apache License 2.0 and governed by Japanese law. Its use is limited to "information analysis" as defined in Article 30-4 of the Japanese Copyright Act.
## Citation
```bibtex
@misc{sugiura2025waonlargescalehighqualityjapanese,
  title={WAON: Large-Scale and High-Quality Japanese Image-Text Pair Dataset for Vision-Language Models},
  author={Issa Sugiura and Shuhei Kurita and Yusuke Oda and Daisuke Kawahara and Yasuo Okabe and Naoaki Okazaki},
  year={2025},
  eprint={2510.22276},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2510.22276},
}
```