---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: url
    dtype: string
  - name: caption
    dtype: string
  - name: similarity
    dtype: float64
  - name: page_title
    dtype: string
  - name: page_url
    dtype: string
  - name: punsafe
    dtype: float64
  - name: width
    dtype: float64
  - name: height
    dtype: float64
  - name: original_width
    dtype: float64
  - name: original_height
    dtype: float64
  - name: sha256
    dtype: string
  - name: phash
    dtype: string
  splits:
  - name: train
    num_bytes: 72405439283
    num_examples: 153942892
  download_size: 46743814850
  dataset_size: 72405439283
license: apache-2.0
language:
- ja
size_categories:
- 100M<n<1B
---

<div align="center" style="line-height: 1;">
<h1>WAON: Large-Scale and High-Quality Japanese Image-Text Pair Dataset for Vision-Language Models </h1> 


  |
  <a href="https://huggingface.co/collections/speed/waon" target="_blank">🤗 HuggingFace</a>
  &nbsp;|
  <a href="https://arxiv.org/abs/2510.22276" target="_blank">📄 Paper</a>
  &nbsp;|
  <a href="https://github.com/llm-jp/WAON" target="_blank">🧑‍💻 Code</a>
  &nbsp;|

  <br/>

<img src="validation_top1_accuracy.svg" width="50%"/>
</div>


## Introduction
WAON is a Japanese image-text pair dataset containing approximately 155M examples crawled from Common Crawl.
It is built from the 2025-18, 2025-08, 2024-51, 2024-42, 2024-33, and 2024-26 snapshots.
To keep the dataset high-quality and diverse, we construct it through a careful data processing pipeline:
we filter based on image size and SigLIP similarity scores, and deduplicate using URLs, captions, and perceptual hashes (pHash).

## How to Use
```python
from datasets import load_dataset

# Note: the full download is ~47 GB. Pass streaming=True to iterate
# over examples without downloading everything up front.
ds = load_dataset("speed/WAON")
```

### Format

- `url`: URL of the image
- `caption`: Caption associated with the image
- `page_title`: Title of the page containing the image
- `page_url`: URL of the page
- `punsafe`: Probability that the image is unsafe
- `similarity`: SigLIP similarity score between the image and its caption
- `width`: Width (in pixels) of the resized image used for computing pHash
- `height`: Height (in pixels) of the resized image used for computing pHash
- `original_width`: Original width of the image
- `original_height`: Original height of the image
- `sha256`: SHA-256 hash of the original image file
- `phash`: Perceptual hash (pHash) computed from the resized image
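As a sketch of how these fields might be used downstream, the snippet below filters records by `punsafe` and `similarity`. The threshold values are illustrative assumptions for demonstration, not the values used to construct WAON.

```python
# Illustrative post-filtering on WAON-style records. The thresholds
# below are assumptions for demonstration purposes only.

def keep_record(rec, max_punsafe=0.1, min_similarity=0.3):
    """Keep a record only if it is likely safe and well-aligned."""
    return rec["punsafe"] <= max_punsafe and rec["similarity"] >= min_similarity

records = [
    {"url": "https://example.com/a.jpg", "punsafe": 0.02, "similarity": 0.41},
    {"url": "https://example.com/b.jpg", "punsafe": 0.55, "similarity": 0.47},
    {"url": "https://example.com/c.jpg", "punsafe": 0.01, "similarity": 0.12},
]

kept = [r for r in records if keep_record(r)]
print([r["url"] for r in kept])  # only a.jpg passes both checks
```

In practice, such a filter can be applied lazily with `Dataset.filter` instead of materializing the rows in memory.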


## Dataset Construction Pipeline

We construct the WAON dataset through the following steps (the numbers in parentheses indicate the remaining data count after each processing step, based on the 2025-18 snapshot):
<div align="center">
<img src="waon-pipeline.svg" width="50%"/>
</div>
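The pHash-based deduplication step can be illustrated with a toy sketch. Real pHash applies a DCT to a resized grayscale image; this simplified "average hash" variant only demonstrates the dedup logic, not the actual hash used to build WAON.

```python
# Toy perceptual-hash deduplication. Images whose hashes collide are
# treated as near-duplicates and only the first occurrence is kept.

def average_hash(pixels):
    """pixels: 2D list of grayscale values. Returns a bit string."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def dedup_by_hash(images):
    seen, unique = set(), []
    for img in images:
        h = average_hash(img)
        if h not in seen:
            seen.add(h)
            unique.append(img)
    return unique

bright = [[200, 210], [10, 20]]
bright_copy = [[201, 209], [11, 19]]  # near-duplicate: same bit pattern
dark = [[10, 20], [200, 210]]
print(len(dedup_by_hash([bright, bright_copy, dark])))  # 2 unique images
```

Because perceptual hashes are stable under small pixel-level changes, this catches re-encoded or lightly edited copies that an exact `sha256` comparison would miss.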


## LICENSE
This dataset is licensed under the Apache License 2.0 and governed by Japanese law. Its use is limited to “information analysis” as defined in Article 30-4 of the Japanese Copyright Act.

## Citation

```bibtex
@misc{sugiura2025waonlargescalehighqualityjapanese,
      title={WAON: Large-Scale and High-Quality Japanese Image-Text Pair Dataset for Vision-Language Models}, 
      author={Issa Sugiura and Shuhei Kurita and Yusuke Oda and Daisuke Kawahara and Yasuo Okabe and Naoaki Okazaki},
      year={2025},
      eprint={2510.22276},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2510.22276}, 
}
```