Commit b8a15f8 (verified) · 1 Parent(s): 6fb7eba
Committed by Tingyang-Chen and nielsr (HF Staff)

Add comprehensive dataset card for Iceberg benchmark (#1)

Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>
Files changed (1): README.md (added, +227 −0)
---
license: mit
task_categories:
- image-classification
- text-retrieval
- other
language:
- en
tags:
- vector-similarity-search
- benchmark
- vss
- face-recognition
- recommendation-systems
---

# Iceberg: Task-Centric Benchmarks for Vector Similarity Search

<div align="center">
<img src="https://github.com/ZJU-DAILY/Iceberg/blob/main/pictures/logo.png?raw=true" width="210px">
</div>

The Iceberg benchmark was presented in the paper [Reveal Hidden Pitfalls and Navigate Next Generation of Vector Similarity Search from Task-Centric Views](https://huggingface.co/papers/2512.12980).

**Code Repository:** [https://github.com/ZJU-DAILY/Iceberg](https://github.com/ZJU-DAILY/Iceberg)

## Introduction
Iceberg is a comprehensive benchmark suite for end-to-end evaluation of VSS (Vector Similarity Search) methods in realistic application settings. From a task-centric view, Iceberg uncovers the Information Loss Funnel, which identifies three principal sources of end-to-end performance degradation: (1) Embedding Loss during feature extraction; (2) Metric Misuse, where distances poorly reflect task relevance; and (3) Data Distribution Sensitivity, highlighting index robustness across skews and modalities.

Iceberg spans 7 diverse datasets across key domains including image classification, face recognition, text retrieval, and recommendation systems. Each dataset contains 1M to 100M vectors enriched with task-specific labels and metrics, enabling evaluation of retrieval algorithms within full application pipelines, not just in isolated recall-speed scenarios. Iceberg benchmarks 13 state-of-the-art VSS algorithms and re-ranks them using task-centric performance metrics, uncovering substantial deviations from conventional recall/speed-based rankings. Moreover, Iceberg proposes an interpretable decision tree to guide practitioners in selecting and tuning VSS methods for specific workloads.

<div align="center">
<img src="https://github.com/ZJU-DAILY/Iceberg/blob/main/pictures/main.png?raw=true" width="900px">
</div>

## Datasets
### Overview
| Dataset | Base Size | Dim | Query Size | Domain | Original Data Source |
| :----------------------------------------------------------- | :---------- | :--- | :--------- | :------- | :------------------ |
| ImageNet-DINOv2 | 1,281,167 | 768 | 50,000 | Image Classification | https://image-net.org/index.php |
| ImageNet-EVA02 | 1,281,167 | 1024 | 50,000 | Image Classification | https://image-net.org/index.php |
| ImageNet-ConvNeXt | 1,281,167 | 1536 | 50,000 | Image Classification | https://image-net.org/index.php |
| Glink360K-IR101 | 17,091,649 | 512 | 20,000 | Face Recognition | https://github.com/deepinsight/insightface/tree/master/recognition/partial_fc#glint360k |
| Glink360K-ViT | 17,091,649 | 512 | 20,000 | Face Recognition | https://github.com/deepinsight/insightface/tree/master/recognition/partial_fc#glint360k |
| BookCorpus | 9,250,529 | 1024 | 10,000 | Text Retrieval | https://huggingface.co/datasets/bookcorpus/bookcorpus |
| Commerce | 99,085,171 | 48 | 64,111 | Recommendation | Proprietary (anonymized e-commerce logs) |

### Detailed Description
#### D1: ImageNet

ImageNet is a large-scale dataset containing millions of high-resolution images spanning thousands of object categories. Each image is annotated with ground-truth labels, either manually or semi-automatically. The dataset has been widely used in the computer vision community for model training and benchmarking, particularly for image classification tasks.

**Embedding Models:**

- DINOv2: https://huggingface.co/facebook/dinov2-base
- EVA02: https://huggingface.co/timm/eva02_large_patch14_448.mim_m38m_ft_in22k_in1k
- ConvNeXt: https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384

**End Tasks:**
- Label Recall@K: measures how many of the top-K retrieved results carry the correct task-specific label.

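As a concrete illustration, Label Recall@K can be computed roughly as follows. This is a minimal sketch: the function names and the averaging convention are our own, not taken from the Iceberg codebase.

```python
def label_recall_at_k(query_label, retrieved_labels, k):
    """Fraction of the top-k retrieved items whose label matches the query's label."""
    top_k = retrieved_labels[:k]
    return sum(1 for label in top_k if label == query_label) / k

def mean_label_recall(queries, k):
    """Average Label Recall@K over (query_label, retrieved_labels) pairs."""
    return sum(label_recall_at_k(q, r, k) for q, r in queries) / len(queries)
```

In practice the benchmark reports this averaged over the full query set at several values of K.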
#### D2: Glink360K

Glint360K is a large-scale face dataset created by merging and cleaning multiple public face datasets to significantly expand both the number of identities and facial images.

**Embedding Models:**

- ResNet-IR101: https://huggingface.co/minchul/cvlface_arcface_ir101_webface4m
- ViT: https://huggingface.co/gaunernst/vit_tiny_patch8_112.arcface_ms1mv3

**End Tasks:**
- Label Recall@K: measures how many of the top-K retrieved results carry the correct task-specific label.

#### D3: BookCorpus

BookCorpus consists of text extracted from approximately 19,000 books spanning various domains and has been curated into a high-quality corpus. The text was segmented at the paragraph level, with each paragraph concatenated into chunks containing eight sentences. This preprocessing resulted in a base dataset of 9,250,529 paragraphs. From this corpus, 10,000 paragraphs were randomly sampled to construct the query set. The unique ID of each paragraph was used as the label for its corresponding embedding vector.

**Embedding Models:**

- Stella: https://huggingface.co/NovaSearch/stella_en_1.5B_v5

**End Tasks:**
- Hit@K: measures whether the most semantically relevant paragraph is included in the top-K retrieved results.

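Since each query paragraph has exactly one ground-truth ID, Hit@K reduces to a membership check over the top-K results. A minimal sketch (function name is ours, not from the Iceberg codebase):

```python
def hit_at_k(relevant_id, retrieved_ids, k):
    """1.0 if the single ground-truth paragraph ID appears in the top-k results, else 0.0."""
    return 1.0 if relevant_id in retrieved_ids[:k] else 0.0
```

Averaging this indicator over all 10,000 queries gives the reported Hit@K.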
#### D4: Commerce

The Commerce dataset, derived from anonymized traffic logs of a major e-commerce platform, serves as a representative benchmark for large-scale e-commerce systems. Collected over several months, the dataset comprises 99,085,171 records of frequently purchased grocery items. In addition, a query set of 64,111 entries was constructed to represent user profiles and associated search keywords. Each query is linked to a sequence of high-popularity items, enabling evaluation on downstream recommendation tasks. Item IDs are used as labels throughout the dataset.

**Embedding Models:**

- ResFlow: https://github.com/FuCongResearchSquad/ResFlow

**End Tasks:**
- Matching Score@K: measures whether the vectors retrieved by a query are both relevant and popular, as well as the cumulative popularity of those items.

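One plausible reading of Matching Score@K is the cumulative popularity of top-K retrieved items that are also task-relevant. The sketch below is purely illustrative; the exact formula, function name, and popularity normalization are assumptions, not taken from the paper or codebase.

```python
def matching_score_at_k(retrieved_ids, relevant_ids, popularity, k):
    """Cumulative popularity of the top-k retrieved items that are also relevant.

    popularity: dict mapping item ID -> popularity score (hypothetical structure).
    """
    return sum(popularity.get(item, 0.0)
               for item in retrieved_ids[:k]
               if item in relevant_ids)
```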
## Supported Algorithms

| Algorithm | Metric | Type | Original Code Link |
| :------ | :------------------ | :-------------- | :----------------------------------------------------------- |
| Fargo | Inner Product | Partition-based | https://github.com/Jacyhust/FARGO_VLDB23 |
| ScaNN | Inner Product | Partition-based | https://github.com/google-research/google-research/tree/master/scann |
| ip-NSW | Inner Product | Graph-based | https://github.com/stanis-morozov/ip-nsw |
| ip-NSW+ | Inner Product | Graph-based | https://github.com/jerry-liujie/ip-nsw/tree/GraphMIPS |
| Mobius | Inner Product | Graph-based | Our own implementation |
| NAPG | Inner Product | Graph-based | Our own implementation |
| MAG | Inner Product | Graph-based | https://github.com/ZJU-DAILY/MAG |
| RaBitQ | Euclidean Distance | Partition-based | https://github.com/VectorDB-NTU/RaBitQ-Library |
| IVFPQ | Euclidean Distance | Partition-based | https://github.com/facebookresearch/faiss |
| DB-LSH | Euclidean Distance | Partition-based | https://github.com/Jacyhust/DB-LSH |
| HNSW | Euclidean Distance | Graph-based | https://github.com/nmslib/hnswlib |
| NSG | Euclidean Distance | Graph-based | https://github.com/ZJULearning/nsg |
| Vamana | Euclidean Distance | Graph-based | https://github.com/microsoft/DiskANN |

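The two metric families above (inner product vs. Euclidean distance) rank neighbors differently, which is why the table separates them. An exact brute-force baseline of the kind commonly used to generate ground truth for such benchmarks makes the difference explicit. This is a hedged NumPy sketch, not code from the Iceberg repository:

```python
import numpy as np

def exact_knn(base, queries, k, metric="l2"):
    """Exact top-k neighbor IDs under either metric family used above."""
    if metric == "l2":    # Euclidean distance: smaller is better
        dists = ((queries[:, None, :] - base[None, :, :]) ** 2).sum(axis=-1)
        order = np.argsort(dists, axis=1)
    elif metric == "ip":  # Inner product (MIPS): larger score is better
        order = np.argsort(-(queries @ base.T), axis=1)
    else:
        raise ValueError(f"unknown metric: {metric}")
    return order[:, :k]
```

Note that for the same base and query vectors the nearest neighbor under L2 need not be the item with the highest inner product, which is one reason the benchmark evaluates both families separately.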
## Quick Start

### Clone the repository

```bash
git clone https://github.com/ZJU-DAILY/Iceberg.git
```

### Environment Requirements

- Python 3.10+
- Docker
- pyyaml

Run `pip install -r requirements.txt` to install the Python dependencies.

### Run the benchmark
**Example**: We use HNSW on the ImageNet dataset to illustrate how to run the benchmark.

- **Configure the dataset** (config/dataset.yaml):

```yaml
imagenet1k_avg:
  dataset_type: imagenet
  data_pre: imagenet-1k
  train_name: convnext-avg-pool-train.bin
  test_name: convnext-avg-pool-validation.bin
  train_path: /workspace/data/imagenet-1k/convnext-avg-pool-train.bin
  test_path: /workspace/data/imagenet-1k/convnext-avg-pool-validation.bin
  prefix: convnext-avg-pool
  data_dim: 1536
  k: 100
  data_num: 1281167
  query_num: 50000
```

- **Configure the algorithm** (config/algorithm.yaml):

```yaml
hnsw:
  efc: 256
  M: 32
  efs: [100, 200, 300, 400, 500, 600, 800, 1000, 1500]
  type: nn
```

Configuration parameters:
- `efc`: build-time parameter for HNSW (construction candidate list size)
- `M`: build-time parameter for HNSW (maximum neighbors per node)
- `efs`: search-time parameters for HNSW (candidate list sizes swept at query time)
- `type`: distance metric type

- **Run the algorithm & evaluation**
  1. Configure the dataset and algorithm parameters in `config/dataset.yaml` and `config/algorithm.yaml`
  2. Run the algorithm with `python3 run.py hnsw imagenet1k_dinov2 --mode build` to build the index, then with `--mode search` to query it
  3. For more configuration options, refer to `python run.py --help`

## Pipeline
<div align="center">
<img src="https://github.com/ZJU-DAILY/Iceberg/blob/main/pictures/pipeline.png?raw=true" width="900px">
</div>

## Results

### Iceberg Leaderboard 1.0
<div align="center">
<img src="https://github.com/ZJU-DAILY/Iceberg/blob/main/pictures/sigmod26/leaderboard.png?raw=true" width="900px">
</div>

### Task-centric performance versus two similarity metrics

<div align="center">
<img src="https://github.com/ZJU-DAILY/Iceberg/blob/main/pictures/sigmod26/ImageNet-EVA02_metric.png?raw=true" width="900px">
</div>

<div align="center">
<img src="https://github.com/ZJU-DAILY/Iceberg/blob/main/pictures/sigmod26/ImageNet-ConvNeXt_metric.png?raw=true" width="900px">
</div>

<div align="center">
<img src="https://github.com/ZJU-DAILY/Iceberg/blob/main/pictures/sigmod26/Glink360K-IR101_metric.png?raw=true" width="900px">
</div>

<div align="center">
<img src="https://github.com/ZJU-DAILY/Iceberg/blob/main/pictures/sigmod26/BookCorpus_metric.png?raw=true" width="900px">
</div>

### Query Performance on Synthetic Recall@100

<div align="center">
<img src="https://github.com/ZJU-DAILY/Iceberg/blob/main/pictures/sigmod26/ImageNet-DINOv2.png?raw=true" width="900px">
</div>

<div align="center">
<img src="https://github.com/ZJU-DAILY/Iceberg/blob/main/pictures/sigmod26/Glink360k-IR101.png?raw=true" width="900px">
</div>

<div align="center">
<img src="https://github.com/ZJU-DAILY/Iceberg/blob/main/pictures/sigmod26/BookCorpus.png?raw=true" width="900px">
</div>

<div align="center">
<img src="https://github.com/ZJU-DAILY/Iceberg/blob/main/pictures/sigmod26/Commerce.png?raw=true" width="900px">
</div>

## Citation
```bibtex
@article{chen2025iceberg,
  title={Reveal Hidden Pitfalls and Navigate Next Generation of Vector Similarity Search from Task-Centric Views},
  author={Chen, Tingyang and Fu, Cong and Wu, Jiahua and Wu, Haotian and Fan, Hua and Ke, Xiangyu and Gao, Yunjun and Ni, Yabo and Zeng, Anxiang},
  journal={arXiv preprint arXiv:2512.12980},
  year={2025},
  url={https://arxiv.org/abs/2512.12980},
}
```