add-dataset-v1

#2
README.md CHANGED
@@ -1,7 +1,6 @@
1
  ---
2
  task_categories:
3
  - text-retrieval
4
- - image-to-text
5
  - sentence-similarity
6
  language:
7
  - en
@@ -25,12 +24,8 @@ It is specifically designed for integration with VectorDBBench.
25
 
26
  ### Dataset Summary
27
 
28
- - **Source Data**:
29
- - Wikipedia (Public Dataset)
30
- - LAION0400M (Public Dataset)
31
- - **Embedding Model**:
32
- - google/embeddinggemma-300m
33
- - sentence-transformers/clip-ViT-B-32
34
 
35
  ## Dataset Structure
36
 
@@ -39,6 +34,7 @@ For each embedding model, the directory contains two key files:
39
  | Data | Description |
40
  |-------|-------------|
41
  | `centroids.npy` | Cluster centroids used for IVF coarse search |
 
42
 
43
  ## Data Fields
44
 
@@ -46,19 +42,35 @@ For each embedding model, the directory contains two key files:
46
 
47
  - **Purpose**: Finding the nearest clusters for IVF (Inverted File Index)
48
  - **Type**: NumPy array (`np.ndarray`)
49
- - **Shape**: `[32768, 768]` or `[1024, 512]`
50
- - **Description**: 768-dimensional vectors representing 32,768 cluster centroids, or 512-dimensional vectors representing 1,024 cluster centroids.
51
  - **Normalization**: L2-normalized (unit norm)
52
  - **Format**: float32
53
 
54
 
55
  ## Dataset Creation
56
 
57
  ### Source Data
58
 
59
- Source dataset is a large public dataset:
60
- - Wikipedia: [mixedbread-ai/wikipedia-data-en-2023-11](https://huggingface.co/datasets/mixedbread-ai/wikipedia-data-en-2023-11)
61
- - LAION: [LAION-400M](https://laion.ai/blog/laion-400-open-dataset/).
62
 
63
  ### Preprocessing
64
 
@@ -66,7 +78,7 @@ Source dataset is a large public dataset:
66
 
67
  Description TBD
68
 
69
- 2. Chunking (for text): For texts exceeding 2048 tokens:
70
 
71
  - Split into chunks with ~100 token overlap
72
  - Embedded each chunk separately
@@ -76,16 +88,10 @@ Source dataset is a large public dataset:
76
 
77
  ### Embedding Generation
78
 
79
- - Text:
80
- - Model: google/embeddinggemma-300m
81
- - Dimension: 768
82
- - Max Token Length: 2048
83
- - Normalization: L2-normalized
84
-
85
- - Multi-Modal:
86
- - Model: sentence-transformers/clip-ViT-B-32
87
- - Dimension: 512
88
- - Normalization: L2-normalized
89
 
90
  ## Usage
91
 
@@ -93,9 +99,10 @@ Source dataset is a large public dataset:
93
  import wget
94
 
95
  def download_centroids(embedding_model: str, dataset_dir: str) -> None:
96
- """Download pre-computed centroids for IVF_GAS."""
97
- dataset_link = f"https://huggingface.co/datasets/cryptolab-playground/gas-centroids/resolve/main/{embedding_model}"
98
  wget.download(f"{dataset_link}/centroids.npy", out=f"{dataset_dir}/centroids.npy")
 
99
  ```
100
 
101
  ## License
@@ -128,16 +135,6 @@ If you use this dataset, please cite:
128
  }
129
  ```
130
 
131
- ```bibtex
132
- @dataset{laion400m,
133
- author = {Schuhmann, Christoph and others},
134
- title = {LAION-AI},
135
- year = {2021},
136
- publisher = {LAION},
137
- url = {https://laion.ai/blog/laion-400-open-dataset}
138
- }
139
- ```
140
-
141
  ### Embedding Model Citation
142
 
143
  ```bibtex
@@ -149,21 +146,8 @@ If you use this dataset, please cite:
149
  }
150
  ```
151
 
152
- ```bibtex
153
- @misc{clipvitb32,
154
- title={CLIP ViT-B/32},
155
- author={Open AI},
156
- year={2021},
157
- url={https://huggingface.co/sentence-transformers/clip-ViT-B-32}
158
- }
159
- ```
160
-
161
  ### Acknowledgments
162
 
163
- - Original dataset:
164
- - mixedbread-ai/wikipedia-data-en-2023-11
165
- - LAION-400M
166
- - Embedding model:
167
- - google/embeddinggemma-300m
168
- - sentence-transformers/clip-ViT-B-32
169
  - Benchmark framework: VectorDBBench
 
1
  ---
2
  task_categories:
3
  - text-retrieval
 
4
  - sentence-similarity
5
  language:
6
  - en
 
24
 
25
  ### Dataset Summary
26
 
27
+ - **Source Data**: Wikipedia (Public Dataset)
28
+ - **Embedding Model**: [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m)
 
29
 
30
  ## Dataset Structure
31
 
 
34
  | Data | Description |
35
  |-------|-------------|
36
  | `centroids.npy` | Cluster centroids used for IVF coarse search |
37
+ | `tree_info.pkl` | Hierarchical tree metadata (parent-child links and leaf node info) |
38
 
39
  ## Data Fields
40
 
 
42
 
43
  - **Purpose**: Finding the nearest clusters for IVF (Inverted File Index)
44
  - **Type**: NumPy array (`np.ndarray`)
45
+ - **Shape**: `[32768, 768]`
46
+ - **Description**: 768-dimensional vectors representing 32,768 cluster centroids
47
  - **Normalization**: L2-normalized (unit norm)
48
  - **Format**: float32
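Because the rows of `centroids.npy` are unit-norm, the IVF coarse step reduces to a dot product (cosine similarity). A minimal NumPy sketch, with a small random array standing in for the real `[32768, 768]` file:

```python
import numpy as np

# Toy stand-in for centroids.npy: rows are L2-normalized float32 centroid vectors.
rng = np.random.default_rng(0)
centroids = rng.standard_normal((1024, 768)).astype(np.float32)
centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)

def nearest_centroids(query: np.ndarray, k: int = 4) -> np.ndarray:
    """Return indices of the k closest centroids to a query vector."""
    query = query / np.linalg.norm(query)   # match the stored normalization
    scores = centroids @ query              # dot product == cosine for unit vectors
    return np.argsort(-scores)[:k]          # highest similarity first

query = rng.standard_normal(768).astype(np.float32)
top = nearest_centroids(query, k=4)
```

With the real file, replace the random array with `np.load("centroids.npy")`; the shapes change but the lookup is identical.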
49
 
50
+ ### Tree Metadata: `tree_info.pkl`
51
+
52
+ - **Purpose**: Finding virtual clusters by following the hierarchical tree structure for efficient GAS search
53
+ - **Type**: Python dictionary (pickle)
54
+ - **Keys**:
55
+ - `node_parents`: Dictionary mapping each node ID to its parent node ID
56
+ - Format: `{node_id: parent_node_id, ...}`
57
+ - Contains parent-child relationships for all nodes in the tree
58
+
59
+ - `leaf_ids`: List of leaf node IDs
60
+ - Format: `[leaf_id_1, leaf_id_2, ..., leaf_id_32768]`
61
+ - Total 32,768 leaf nodes (corresponding to 32,768 centroids)
62
+
63
+ - `leaf_to_centroid_idx`: Mapping from leaf node IDs to centroid indices in `centroids.npy`
64
+ - Format: `{leaf_node_id: centroid_index, ...}`
65
+ - Maps each leaf node to its corresponding row index in `centroids.npy`
66
+ - Important: Leaf IDs in `leaf_ids` are ordered sequentially, so the i-th leaf corresponds to the i-th centroid
67
+
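The key layout above can be exercised with a small hand-built dictionary standing in for the real `tree_info.pkl` (which has 32,768 leaves); the parent traversal and leaf-to-centroid mapping work the same way:

```python
import pickle

# Toy tree: root 0 -> internal nodes 1, 2 -> leaves 3..6.
tree_info = {
    "node_parents": {1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 2},
    "leaf_ids": [3, 4, 5, 6],
    "leaf_to_centroid_idx": {3: 0, 4: 1, 5: 2, 6: 3},
}

# Round-trip through pickle, as the real file would be loaded from disk.
loaded = pickle.loads(pickle.dumps(tree_info))

def path_to_root(node_id: int) -> list:
    """Walk node_parents from a node up to the root."""
    path = [node_id]
    while path[-1] in loaded["node_parents"]:
        path.append(loaded["node_parents"][path[-1]])
    return path

# The i-th entry of leaf_ids maps to the i-th centroid row.
centroid_row = loaded["leaf_to_centroid_idx"][loaded["leaf_ids"][2]]
```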
68
 
69
  ## Dataset Creation
70
 
71
  ### Source Data
72
 
73
+ The source dataset is a large public Wikipedia corpus: [mixedbread-ai/wikipedia-data-en-2023-11](https://huggingface.co/datasets/mixedbread-ai/wikipedia-data-en-2023-11).
 
 
74
 
75
  ### Preprocessing
76
 
 
78
 
79
  Description TBD
80
 
81
+ 2. Chunking: For texts exceeding 2048 tokens:
82
 
83
  - Split into chunks with ~100 token overlap
84
  - Embedded each chunk separately
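The chunking rule above can be sketched over a generic token list (2048-token window with a ~100-token overlap, as stated; the actual tokenizer is out of scope here):

```python
def chunk_tokens(tokens: list, max_len: int = 2048, overlap: int = 100) -> list:
    """Split a token sequence into max_len windows sharing `overlap` tokens."""
    if len(tokens) <= max_len:
        return [tokens]            # short texts are embedded whole
    stride = max_len - overlap
    chunks = []
    for start in range(0, len(tokens), stride):
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break                  # last window reached the end of the text
    return chunks

chunks = chunk_tokens(list(range(5000)))
```

Each resulting chunk would then be embedded separately, per the step above.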
 
88
 
89
  ### Embedding Generation
90
 
91
+ - Model: google/embeddinggemma-300m
92
+ - Dimension: 768
93
+ - Max Token Length: 2048
94
+ - Normalization: L2-normalized
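Running the model itself needs the `sentence-transformers` stack and the embeddinggemma-300m weights, but the stored normalization convention can be checked with plain NumPy (random vectors stand in for raw model outputs):

```python
import numpy as np

# Stand-in for a batch of raw 768-d embeddings from the model.
rng = np.random.default_rng(1)
raw = rng.standard_normal((8, 768)).astype(np.float32)

# L2-normalize each vector to unit norm, matching the dataset's convention.
embeddings = raw / np.linalg.norm(raw, axis=1, keepdims=True)
norms = np.linalg.norm(embeddings, axis=1)
```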
 
95
 
96
  ## Usage
97
 
 
99
  import wget
100
 
101
  def download_centroids(embedding_model: str, dataset_dir: str) -> None:
102
+ """Download pre-computed centroids and tree info for GAS."""
103
+ dataset_link = "https://huggingface.co/datasets/cryptolab-playground/gas-centroids/resolve/main/embeddinggemma-300m"
104
  wget.download(f"{dataset_link}/centroids.npy", out="centroids.npy")
105
+ wget.download(f"{dataset_link}/tree_info.pkl", out="tree_info.pkl")
106
  ```
107
 
108
  ## License
 
135
  }
136
  ```
137
 
138
  ### Embedding Model Citation
139
 
140
  ```bibtex
 
146
  }
147
  ```
148
 
149
  ### Acknowledgments
150
 
151
+ - Original dataset: mixedbread-ai/wikipedia-data-en-2023-11
152
+ - Embedding model: google/embeddinggemma-300m
 
153
  - Benchmark framework: VectorDBBench
clip-vit-b-32/centroids.npy → embeddinggemma-300m/tree_info.pkl RENAMED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:1feead304605e263b9d0f3e35b8e10ced82a938cec222ba36f65c7e8d7e7b954
3
- size 2097280
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e0aff444db4a474220e24c5ab243f4f1bfc0c56d972333c0b9c3b1422ca3e552
3
+ size 687518