Kaichengalex committed 9b94de3 (verified) · Parent(s): d1fcb95

Update README.md

Files changed (1): README.md +260 −0

README.md CHANGED
@@ -20,4 +20,264 @@ configs:
  path: data/train-*
  license: cc-by-4.0
  ---

<div align="center">

<img src="Figures/danqing.svg" width="30%">

**100M** Chinese image-text pairs | **12TB** dataset | **2024–2025** web data

<h1 align="center">DanQing: An Up-to-Date Large-Scale Chinese Vision-Language Pre-training Dataset</h1>

</div>

<div align="center">

Hengyu Shen<sup>∗</sup>, [**Tiancheng Gu**](https://scholar.google.com/citations?hl=zh-CN&user=9etrpbYAAAAJ)<sup>∗</sup>, Bin Qin, Lan Wu, Yuling Wu, Shuo Tan, [**Zelong Sun**](https://scholar.google.com/citations?user=mDxuGMgAAAAJ&hl=zh-CN), Jun Wang, Nan Wu, [**Xiang An**](https://anxiangsir.github.io/), [**Weidong Cai**](https://weidong-tom-cai.github.io/), [**Ziyong Feng**](https://scholar.google.com/citations?user=xlKttUEAAAAJ&hl=zh-CN)<sup>‡</sup>, [**Kaicheng Yang**](https://kaicheng-yang0828.github.io)<sup>†</sup>

<sup>∗</sup> Equal Contribution | <sup>‡</sup> Team Leader | <sup>†</sup> Project Leader

[![Paper](https://img.shields.io/badge/📄-Paper-red)]()
[![Hugging Face](https://img.shields.io/badge/🤗%20Hugging%20Face-Dataset-yellow)](https://huggingface.co/datasets/DeepGlint-AI/DanQing100M)
[![ModelScope](https://img.shields.io/badge/ModelScope-Dataset-blue)](https://www.modelscope.cn/datasets/deepglint/DanQing)
[![License: CC BY 4.0](https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg)](https://creativecommons.org/licenses/by/4.0/)

</div>

## 📣 News

<div align="left">

- [2026/01/16] ✨ We release the [paper]() of DanQing.
- [2026/01/15] 🔥 We release the DanQing dataset (images and captions, about 12TB) on [ModelScope](https://www.modelscope.cn/datasets/deepglint/DanQing).
- [2026/01/13] ✨ We release the DanQing dataset (image URLs and captions) on [🤗 Hugging Face](https://huggingface.co/datasets/DeepGlint-AI/DanQing100M).

> ⚠️ **Note:** Due to Hugging Face storage and transfer limitations, only the image URLs are released on Hugging Face. To access the complete dataset, please download it from **ModelScope**.

> 💡 **Synthetic Descriptions:** We also provide synthetic short captions (generated by GLM4.1-base-9B) for the DanQing100M dataset in the `recaption` column.

</div>

---

## 📑 Table of Contents

- [💡 Highlights](#-highlights)
- [💻 Dataset Information](#-dataset-information)
  - [Data Preview](#data-preview)
  - [Topic Assessment](#topic-assessment)
  - [Image Resolution and Text Length Distribution](#image-resolution-and-text-length-distribution)
  - [Text Quality](#text-quality)
  - [Cosine Similarity and Semantic Distribution](#cosine-similarity-and-semantic-distribution)
- [📊 Performance Comparison](#-performance-comparison)
  - [Zero-Shot Classification](#zero-shot-classification)
  - [Cross-Modal Retrieval (Short Caption)](#cross-modal-retrieval-short-caption)
  - [Cross-Modal Retrieval (Long Caption)](#cross-modal-retrieval-long-caption)
  - [Chinese-Centric Large Multimodal Model Tasks](#chinese-centric-large-multimodal-model-tasks)
- [🧠 Analysis](#-analysis)
  - [Data and Model Scaling](#data-and-model-scaling)
  - [New Concept Understanding](#new-concept-understanding)
- [📥 Download](#-download)
  - [🤗 Hugging Face](#-hugging-face)
    - [Python API](#python-api)
    - [Command Line](#command-line)
  - [ModelScope](#-modelscope)
    - [Python API](#python-api-1)
    - [Command Line](#command-line-1)
- [📄 License](#-license)
- [📝 Citation](#-citation)

---

## 💡 Highlights

In this paper, we propose the **DanQing** dataset, which contains **100 million** image-text pairs collected from Common Crawl. Unlike existing datasets, DanQing is curated through a more rigorous selection process, yielding superior data quality. Moreover, DanQing is built primarily from **2024–2025** web data, enabling models to better capture evolving semantic trends and thus offering greater practical utility.

We compare DanQing with existing datasets by continually pre-training the SigLIP2 model. Experimental results show that DanQing consistently achieves superior performance across a range of Chinese downstream tasks, including zero-shot classification, cross-modal retrieval, and LMM-based evaluations.

<div align="center">
<img src="Figures/framework.png" width="100%">
</div>

---

## 💻 Dataset Information

### Data Preview

<div align="center">
<img src="Figures/case.png" width="100%">
</div>

### Topic Assessment

We implement a topic modeling pipeline based on [BERTopic](https://github.com/MaartenGr/BERTopic). We randomly sample 10M image-text pairs and extract text embeddings with [Chinese-CLIP-L/14](https://github.com/OFA-Sys/Chinese-CLIP). To make clustering tractable in high dimensions, we apply UMAP for dimensionality reduction, followed by HDBSCAN to identify semantic clusters, with a minimum cluster size of 1,000 for stability and noise reduction. Finally, we use class-based TF-IDF to extract representative keywords for each topic.

<div align="center">
<img src="Figures/topic_examples.png" width="100%">
</div>
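
The keyword-extraction step can be sketched in plain Python. This is a minimal illustration of one common class-based TF-IDF formulation, not BERTopic's exact implementation, and the toy tokenized clusters below are invented:

```python
import math
from collections import Counter

def c_tf_idf(clusters):
    """One common class-based TF-IDF variant: treat each cluster as a single
    concatenated document, then weight a term's in-cluster frequency by how
    rare the term is across all clusters.

    clusters: dict mapping cluster id -> list of tokenized documents.
    Returns: dict mapping cluster id -> {term: score}.
    """
    # Term counts per cluster (all of the cluster's documents concatenated).
    cluster_tf = {
        cid: Counter(tok for doc in docs for tok in doc)
        for cid, docs in clusters.items()
    }
    # A: average number of words per cluster.
    avg_words = sum(sum(tf.values()) for tf in cluster_tf.values()) / len(cluster_tf)
    # f_x: total frequency of each term across all clusters.
    term_freq = Counter()
    for tf in cluster_tf.values():
        term_freq.update(tf)
    scores = {}
    for cid, tf in cluster_tf.items():
        total = sum(tf.values())
        scores[cid] = {
            term: (count / total) * math.log(1 + avg_words / term_freq[term])
            for term, count in tf.items()
        }
    return scores

# Toy tokenized clusters (invented): the distinctive term should surface
# as the top keyword of each cluster.
clusters = {
    0: [["cat", "cat", "pet"], ["cat", "animal"]],
    1: [["stock", "market"], ["stock", "price"]],
}
keywords = {cid: max(s, key=s.get) for cid, s in c_tf_idf(clusters).items()}
```

Terms shared across clusters are penalized by the `log(1 + A / f_x)` factor, so each cluster's highest-scoring terms are the ones that characterize it.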

### Image Resolution and Text Length Distribution

We analyze image resolutions by width, height, and minimum dimension, demonstrating a wide range of visual scales. We also report the distribution of text lengths across the dataset's **2.2B** Chinese words.

<div align="center">
<img src="Figures/statistic.png" width="100%">
</div>
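
Both statistics reduce to simple bucketed counts; a minimal sketch, where the bin edges are illustrative choices rather than the paper's:

```python
from collections import Counter

def min_dim_histogram(sizes, edges=(0, 128, 256, 512, 1024)):
    """Bucket images by min(width, height) into resolution bins."""
    hist = Counter()
    for w, h in sizes:
        # Largest bin edge not exceeding the minimum dimension.
        hist[max(e for e in edges if e <= min(w, h))] += 1
    return dict(hist)

def length_histogram(captions, edges=(0, 8, 16, 32, 64)):
    """Bucket captions by character count (a simple proxy for word count)."""
    hist = Counter()
    for text in captions:
        hist[max(e for e in edges if e <= len(text))] += 1
    return dict(hist)
```

Running the same aggregation over the full 100M pairs is what produces the distributions shown above.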

### Text Quality

We evaluate the text quality of DanQing using two metrics: **semantic word density** and **perplexity (PPL)**. We randomly sample 10M texts each from DanQing, Wukong, and Zero for comparison. Semantic words (nouns, verbs, and adjectives) are identified with the jieba toolkit, and their proportion in each sentence is taken as the semantic density. Sentence-level perplexity is computed with a pre-trained Chinese [BERT](https://huggingface.co/google-bert/bert-base-chinese) model.

<div align="center">
<img src="Figures/quality.png" width="100%">
</div>
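
Given POS-tagged tokens and per-token log-probabilities, both metrics are one-liners. A minimal sketch: the tag prefixes follow jieba's convention (`n`/`v`/`a`), but the tagged example is hand-written rather than produced by jieba, and the BERT scoring that would supply the log-probabilities is omitted:

```python
import math

# jieba POS tags for semantic words start with n (noun), v (verb), a (adjective).
SEMANTIC_PREFIXES = ("n", "v", "a")

def semantic_density(tagged_tokens):
    """Proportion of semantic words among (word, pos) pairs,
    e.g. the output of jieba.posseg.cut()."""
    if not tagged_tokens:
        return 0.0
    hits = sum(1 for _, pos in tagged_tokens if pos.startswith(SEMANTIC_PREFIXES))
    return hits / len(tagged_tokens)

def sentence_ppl(token_log_probs):
    """Perplexity from per-token natural-log probabilities
    (e.g. BERT pseudo-log-likelihoods): exp of the mean negative log-prob."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))
```

Higher semantic density and lower perplexity both indicate cleaner, more informative captions.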

### Cosine Similarity and Semantic Distribution

We analyze 10M-sample subsets of DanQing and Wukong by plotting their image-text similarity distributions, computed with [FG-CLIP2-L/16@256](https://huggingface.co/qihoo360/fg-clip2-large). For the semantic distribution comparison, 10M images from each dataset are clustered into 10K groups using [FAISS](https://github.com/facebookresearch/faiss), with clusters ranked by sample count.

<div align="center">
<img src="Figures/distribution.png" width="100%">
</div>
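
The quantity being histogrammed is simply the cosine between each pair's image and text embeddings. A stdlib-only sketch; in the actual pipeline the vectors would be FG-CLIP2 features rather than the toy lists shown here:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors:
    dot product divided by the product of the L2 norms."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```

Orthogonal embeddings score 0 and parallel ones score 1, so a distribution shifted toward 1 indicates better-aligned image-text pairs.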

---

## 📊 Performance Comparison

### Zero-Shot Classification

<div align="center">
<img src="Figures/classification.png" width="80%">
</div>

### Cross-Modal Retrieval (Short Caption)

<div align="center">
<img src="Figures/short.png" width="100%">
</div>

### Cross-Modal Retrieval (Long Caption)

<div align="center">
<img src="Figures/long.png" width="100%">
</div>

### Chinese-Centric Large Multimodal Model Tasks

<div align="center">
<img src="Figures/LMM.png" width="80%">
</div>

---

## 🧠 Analysis

### Data and Model Scaling

We compare the data and model scaling behavior of DanQing and Wukong, reporting average zero-shot classification and retrieval (long & short caption) performance in the figure below.

<div align="center">
<img src="Figures/scaling.png" width="100%">
</div>

### New Concept Understanding

We evaluate SigLIP2-L/16 models pre-trained on various Chinese datasets for emergent concept understanding, and find that the model trained on DanQing consistently assigns the highest confidence to the correct pairs.

<div align="center">
<img src="Figures/new_concept.png" width="100%">
</div>

---

## 📥 Download

### 🤗 Hugging Face

#### Python API

```python
from datasets import load_dataset

ds = load_dataset("DeepGlint-AI/DanQing100M")
```

#### Command Line

```bash
# Install dependencies
# macOS:
#   brew install git-xet
#   git xet install
# Ubuntu/Debian:
#   sudo apt update
#   sudo apt install aria2

# Install git-lfs
#   curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
#   sudo apt-get install git-lfs
#   git lfs install

# Download dataset URLs and captions
bash hfd.sh DeepGlint-AI/DanQing100M --dataset --tool aria2c -x 10

# Download images with img2dataset (pip install img2dataset)
# For better performance, it is highly recommended to set up a fast DNS resolver.
# See: https://github.com/rom1504/img2dataset#setting-up-a-high-performance-dns-resolver
img2dataset --url_list DanQing100M/data \
    --input_format "parquet" \
    --url_col "url" \
    --caption_col "alt_text" \
    --output_format webdataset \
    --output_folder DanQing100M-webdataset \
    --processes_count 16 \
    --thread_count 32 \
    --image_size 256 \
    --resize_only_if_bigger=True \
    --resize_mode="keep_ratio" \
    --skip_reencode=True \
    --save_additional_columns '["recaption"]' \
    --enable_wandb False
```

### <img src="Figures/modelscope.png" alt="ModelScope" style="width:16px; height:12px;"/> ModelScope

#### Python API

```python
from modelscope.msdatasets import MsDataset

ds = MsDataset.load('deepglint/DanQing')
```

#### Command Line

```bash
pip install modelscope
modelscope download --dataset deepglint/DanQing
```

---

## 📄 License

The DanQing dataset is licensed under the [CC BY 4.0 License](https://creativecommons.org/licenses/by/4.0/); the full text can be found in the [LICENSE.cc-by-4.0 file](./LICENSE.cc-by-4.0). The dataset is collected from Common Crawl web pages and may contain biased or sensitive content. Each collected item remains subject to the license of its original source. Users are solely responsible for ensuring compliance with ethical and legal standards in their research or applications.

---

## 📝 Citation

If you find this repository useful, please cite it with the following BibTeX entry.

```bibtex
Coming Soon
```

---

<div align="center">

### ⭐ Don't forget to star this repository if you find it helpful!

</div>