ashvardanian committed (verified)
Commit 2fa729a · 1 Parent(s): 09d4f21

Upload folder using huggingface_hub

Files changed (2):
  1. LICENSE +47 -0
  2. README.md +98 -3
LICENSE ADDED
@@ -0,0 +1,47 @@
+ # Open Use of Data Agreement v1.0
+
+ This is the Open Use of Data Agreement, Version 1.0 (the "O-UDA"). Capitalized terms are defined in Section 5. Data Provider and you agree as follows:
+
+ 1. **Provision of the Data**
+
+ 1.1. You may use, modify, and distribute the Data made available to you by the Data Provider under this O-UDA if you follow the O-UDA's terms.
+
+ 1.2. Data Provider will not sue you or any Downstream Recipient for any claim arising out of the use, modification, or distribution of the Data provided you meet the terms of the O-UDA.
+
+ 1.3. This O-UDA does not restrict your use, modification, or distribution of any portions of the Data that are in the public domain or that may be used, modified, or distributed under any other legal exception or limitation.
+
+ 2. **No Restrictions on Use or Results**
+
+ 2.1. The O-UDA does not impose any restriction with respect to:
+
+ 2.1.1. the use or modification of Data; or
+
+ 2.1.2. the use, modification, or distribution of Results.
+
+ 3. **Redistribution of Data**
+
+ 3.1. You may redistribute the Data under terms of your choice, so long as:
+
+ 3.1.1. You include with any Data you redistribute all credit or attribution information that you received with the Data, and your terms require any Downstream Recipient to do the same; and
+
+ 3.1.2. Your terms include a warranty disclaimer and limitation of liability for Upstream Data Providers at least as broad as those contained in Section 4.2 and 4.3 of the O-UDA.
+
+ 4. **No Warranty, Limitation of Liability**
+
+ 4.1. Data Provider does not represent or warrant that it has any rights whatsoever in the Data.
+
+ 4.2. THE DATA IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
+
+ 4.3. NEITHER DATA PROVIDER NOR ANY UPSTREAM DATA PROVIDER SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE DATA OR RESULTS, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
+
+ 5. **Definitions**
+
+ 5.1. "Data" means the information made available to you under this O-UDA.
+
+ 5.2. "Data Provider" means the source of the Data.
+
+ 5.3. "Downstream Recipient" means any person or persons who receives the Data directly or indirectly from you in accordance with the O-UDA.
+
+ 5.4. "Result" means the research, technical or other result which is produced through the use of Data.
+
+ 5.5. "Upstream Data Provider" means the Data Provider or any person who receives Data directly or indirectly from the Data Provider under the O-UDA and in turn redistributes it.
README.md CHANGED
@@ -1,3 +1,98 @@
- ---
- license: apache-2.0
- ---
+ # SpaceV 1B
+
+ SpaceV, initially published by Microsoft, is arguably the best dataset for large-scale Vector Search benchmarks.
+ It's large enough to stress-test indexing engines running across hundreds of CPU or GPU cores, and it is significantly larger than the traditional [Big-ANN](https://big-ann-benchmarks.com/) workloads, which generally operate on just 10 million vectors.
+ It provides vectors in 8-bit integer form, empirically optimal for large-scale Information Retrieval and Recommender Systems, capable of leveraging hardware-accelerated quantized dot-products and other SIMD extensions, such as AVX-512 VNNI on x86 and SVE2 on Arm.
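+
+ To see why the 8-bit representation matters, here is a small NumPy sketch of the quantized dot-product that VNNI-class instructions accelerate in hardware; the overflow bound relies only on the 100-dimensional layout described below:
+
+ ```python
+ import numpy as np
+
+ # Two random 100-dimensional i8 vectors, like the ones in this dataset
+ a = np.random.randint(-128, 128, size=100, dtype=np.int8)
+ b = np.random.randint(-128, 128, size=100, dtype=np.int8)
+
+ # Accumulate in i32 - the mixed-precision pattern VNNI executes in hardware;
+ # 100 * 128 * 128 < 2^31, so a 32-bit accumulator can never overflow here
+ dot = int(np.dot(a.astype(np.int32), b.astype(np.int32)))
+ ```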
+
+ The [original dataset](https://github.com/microsoft/SPTAG/tree/main/datasets/SPACEV1B) was fragmented into 4 GB chunks, which required additional preprocessing before it could be used.
+ This adaptation re-distributes it under the same [O-UDA license](https://github.com/microsoft/SPTAG/blob/main/datasets/SPACEV1B/LICENSE), but in a more accessible format, augmented with additional metadata.
+ The project description is hosted on [GitHub](https://github.com/ashvardanian/SpaceV) under `ashvardanian/SpaceV`.
+ The primary merged dataset is hosted on [AWS S3](https://bigger-ann.s3.amazonaws.com/) under `s3://bigger-ann/spacev-1b/`.
+ The smaller subsample is hosted on [HuggingFace](https://huggingface.co/datasets/unum-cloud/ann-spacev-100m) under `unum-cloud/ann-spacev-100m`.
+
+ ## Structure
+
+ All files are binary matrices in row-major order, prefixed with two 32-bit unsigned integers: the number of rows and the number of columns. A parsing sketch follows the lists below.
+
+ - `base.1B.i8bin` - 1.4e9 vectors, each of 100x 8-bit signed integers. (131 GB)
+ - `query.30K.i8bin` - 3e4 query vectors, each of 100x 8-bit signed integers. (3 MB)
+ - `groundtruth.30K.i32bin` - 3e4 ground-truth result lists, each of 100x 32-bit integer row IDs. (12 MB)
+ - `groundtruth.30K.f32bin` - Euclidean distances for each of the 3e4 x 100 search results. (12 MB)
+
+ A smaller 100M subset:
+
+ - `base.100M.i8bin` - 1e8-vector subset, each of 100x 8-bit signed integers. (9 GB)
+ - `ids.100M.i32bin` - 32-bit integer row IDs mapping the 1e8 subset vectors back to the full collection. (380 MB)
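+
+ A minimal NumPy sketch for memory-mapping any of these matrices, relying only on the header layout above; the `open_matrix` helper name is illustrative:
+
+ ```python
+ import numpy as np
+
+ def open_matrix(path: str, dtype=np.int8) -> np.memmap:
+     """Memory-map an *.i8bin / *.i32bin / *.f32bin matrix without loading it."""
+     rows, cols = np.fromfile(path, dtype=np.uint32, count=2)  # 8-byte header
+     return np.memmap(path, dtype=dtype, mode="r", offset=8, shape=(int(rows), int(cols)))
+
+ queries = open_matrix("query.30K.i8bin", dtype=np.int8)  # ~3e4 x 100
+ ```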
+
+ ## Access
+
+ The full dataset is stored on AWS S3, as the individual files exceed the limits of both GitHub LFS and the Hugging Face Datasets platform.
+
+ ```bash
+ $ aws s3 ls s3://bigger-ann/spacev-1b/
+
+ > YYYY-MM-dd HH:mm:ss 140202072008 base.1B.i8bin
+ > YYYY-MM-dd HH:mm:ss 11726408 groundtruth.30K.f32bin
+ > YYYY-MM-dd HH:mm:ss 11726408 groundtruth.30K.i32bin
+ > YYYY-MM-dd HH:mm:ss 2931608 query.30K.i8bin
+ ```
+
+ To download the dataset into a local directory, use the following command:
+
+ ```bash
+ mkdir -p datasets/spacev-1b/
+ aws s3 cp s3://bigger-ann/spacev-1b/ datasets/spacev-1b/ --recursive
+ ```
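+
+ For a quick start without the 131 GB download, you can fetch just the queries and the ground truth first, roughly 27 MB combined according to the listing above:
+
+ ```bash
+ aws s3 cp s3://bigger-ann/spacev-1b/query.30K.i8bin datasets/spacev-1b/
+ aws s3 cp s3://bigger-ann/spacev-1b/groundtruth.30K.i32bin datasets/spacev-1b/
+ aws s3 cp s3://bigger-ann/spacev-1b/groundtruth.30K.f32bin datasets/spacev-1b/
+ ```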
+
+ For convenience, a smaller 100M subset is also available on HuggingFace via LFS:
+
+ ```bash
+ mkdir -p datasets/spacev-100m/ &&
+ wget -nc https://huggingface.co/datasets/unum-cloud/ann-spacev-100m/resolve/main/ids.100m.i32bin -P datasets/spacev-100m/ &&
+ wget -nc https://huggingface.co/datasets/unum-cloud/ann-spacev-100m/resolve/main/base.100m.i8bin -P datasets/spacev-100m/ &&
+ wget -nc https://huggingface.co/datasets/unum-cloud/ann-spacev-100m/resolve/main/query.30K.i8bin -P datasets/spacev-100m/ &&
+ wget -nc https://huggingface.co/datasets/unum-cloud/ann-spacev-100m/resolve/main/groundtruth.30K.i32bin -P datasets/spacev-100m/ &&
+ wget -nc https://huggingface.co/datasets/unum-cloud/ann-spacev-100m/resolve/main/groundtruth.30K.f32bin -P datasets/spacev-100m/
+ ```
+
+ ## Usage
+
+ The dataset can be loaded with the following Python code, "viewing" the base matrix to avoid pulling everything into memory:
+
+ ```python
+ import numpy as np
+ from usearch.io import load_matrix
+
+ base_view = load_matrix("base.1B.i8bin", dtype=np.int8, view=True)
+ queries = load_matrix("query.30K.i8bin", dtype=np.int8)
+ ground_truth = load_matrix("groundtruth.30K.i32bin", dtype=np.int32)
+ distances = load_matrix("groundtruth.30K.f32bin", dtype=np.float32)
+ ```
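+
+ The view behaves like a regular NumPy array, so it can be scanned in fixed-size batches; the batch size here is an arbitrary illustration:
+
+ ```python
+ batch_size = 1_000_000
+ for start in range(0, base_view.shape[0], batch_size):
+     batch = np.asarray(base_view[start : start + batch_size])  # materialize one slice
+     # feed `batch` into indexing, brute-force scoring, etc.
+ ```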
+
+ To construct an index and check the recall against the ground truth:
+
+ ```python
+ from usearch.index import Index, BatchMatches
+
+ index = Index(ndim=100, metric="l2sq", dtype="i8")
+ index.add(None, base_view)  # Use incremental keys from 0 to len(base_view)
+ matches: BatchMatches = index.search(queries)
+ ```
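+
+ The snippet above only runs the search; to quantify recall, compare the returned keys against the `ground_truth` matrix. A minimal sketch, assuming `matches.keys` exposes a `(num_queries, k)` array of result keys with the default `k = 10`:
+
+ ```python
+ import numpy as np
+
+ k = 10  # results returned per query by default
+ recall_at_k = np.mean([
+     np.isin(matches.keys[i][:k], ground_truth[i, :k]).mean()
+     for i in range(len(queries))
+ ])
+ print(f"recall@{k}: {recall_at_k:.4f}")
+ ```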
+
+ On a modern high-core-count system, index construction proceeds at roughly 150'000 vectors per second, so a full build takes around 3 hours.
+ To switch to the smaller dataset, replace the file paths with the corresponding 100M versions:
+
+ ```python
+ import numpy as np
+ from usearch.io import load_matrix
+ from usearch.index import Index, BatchMatches
+
+ base = load_matrix("base.100M.i8bin", dtype=np.int8)
+ ids = load_matrix("ids.100M.i32bin", dtype=np.int32)
+ queries = load_matrix("query.30K.i8bin", dtype=np.int8)
+ ground_truth = load_matrix("groundtruth.30K.i32bin", dtype=np.int32)
+ distances = load_matrix("groundtruth.30K.f32bin", dtype=np.float32)
+
+ index = Index(ndim=100, metric="l2sq", dtype="i8")
+ index.add(ids.flatten(), base)  # keep the original row IDs as flat keys
+ matches: BatchMatches = index.search(queries)
+ ```
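+
+ A multi-hour build is worth persisting to disk. A short sketch, assuming USearch's `save`/`restore` API behaves as in its current documentation:
+
+ ```python
+ index.save("spacev.usearch")             # serialize the constructed index
+ index = Index.restore("spacev.usearch")  # reload later without rebuilding
+ ```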