sequence: list
---

# Dataset Card for MOMIJI

## Dataset Description

- **GitHub Repository:** (Coming soon)
- **Paper:** (Coming soon)

**MOMIJI** (**M**odern **O**pen **M**ult**i**modal **J**apanese f**i**ltered Dataset) is a large-scale, carefully curated public dataset of image-text–interleaved web documents. The dataset was extracted from Common Crawl dumps covering February 2024 – January 2025 and contains roughly **56 million** Japanese documents, **110 billion** characters, and **2.49 billion** images. Details of the collection and filtering pipeline will be described in a forthcoming paper.

Image-text–interleaved data is generally used to train large vision-language models (LVLMs) such as [LLaVA-OneVision](https://arxiv.org/abs/2408.03326), [Idefics 2](https://arxiv.org/abs/2405.02246), [NVILA](https://arxiv.org/abs/2412.04468), and [Qwen 2.5-VL](https://arxiv.org/abs/2502.13923). Using MOMIJI, we trained our proposed model **[Heron-NVILA-Lite](https://huggingface.co/turing-motors/Heron-NVILA-Lite-15B)**.
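As a concrete illustration of what "interleaved" means here, the sketch below flattens one document's parallel text and image-link lists into an ordered multimodal sequence of the kind LVLM training pipelines consume. The record layout and field names (`text_list`, `image_info`) are illustrative assumptions, not MOMIJI's confirmed schema — see the Data Fields section for the actual fields. Note that only image *links* are stored; binaries must be fetched separately.

```python
from typing import Any

# Hypothetical record layout: parallel lists where each position holds either
# a text segment or image metadata (URL only -- MOMIJI stores no binaries).
doc: dict[str, Any] = {
    "text_list": ["Intro paragraph.", None, "Closing paragraph."],
    "image_info": [None, {"url": "https://example.com/photo.jpg"}, None],
}

def to_interleaved(doc: dict[str, Any]) -> list[tuple[str, str]]:
    """Flatten a document into an ordered (kind, payload) sequence."""
    seq = []
    for text, image in zip(doc["text_list"], doc["image_info"]):
        if text is not None:
            seq.append(("text", text))
        if image is not None:
            seq.append(("image", image["url"]))  # link only; fetch separately
    return seq

print(to_interleaved(doc))
```

The resulting sequence preserves the original document order of text and images, which is the property that distinguishes interleaved corpora from plain image–caption pairs.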

Following [OBELICS](https://huggingface.co/datasets/HuggingFaceM4/OBELICS), we provide an [interactive visualization](https://atlas.nomic.ai/data/onely7/momiji-1m/map) that allows users to explore the contents of MOMIJI. The map shows a subset of 1M of the 56M documents.

[Explore the MOMIJI map on Nomic Atlas](https://atlas.nomic.ai/data/onely7/momiji-1m/map)

> **Warning and Disclaimer:** This content may unintentionally include expressions or information that some may find inappropriate. Please view it at your own discretion and responsibility.

## Data Fields

Below is a bar chart of document counts by number of images (documents with ≥ 30 images are omitted for readability):

<img src="assets/momiji_document_count_by_num_of_images.png" width="90%">
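A breakdown like the one above can be reproduced on any subset with the standard library alone. The sketch below uses toy counts; for MOMIJI each count would be the length of a document's image-link list (the exact field name is not assumed here). It also mirrors the chart's ≥ 30-image cutoff.

```python
from collections import Counter

# Toy per-document image counts; for MOMIJI these would be the lengths of
# each document's image-link list.
image_counts = [0, 1, 1, 2, 5, 1, 0, 3, 2, 1, 31]

histogram = Counter(image_counts)  # images-per-document -> number of documents
# Mirror the chart above: omit documents with >= 30 images for readability.
plotted = {n: c for n, c in sorted(histogram.items()) if n < 30}

for n_images, n_docs in plotted.items():
    print(f"{n_images:>2} images: {n_docs} documents")
```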

## Content Warning

Although an NSFW filter was applied, some links or text samples may still be disturbing. The dataset is intended for scientific or safety analyses by trained researchers.

## Disclaimer

## Acknowledgements

This dataset is based on results obtained from a project, JPNP20017, subsidized by the New Energy and Industrial Technology Development Organization (NEDO).