## Multi-Vector Search Datasets

The datasets listed below are used in the [Multi-Vector HNSW](https://github.com/habedi/multi-vector-hnsw) project for testing and benchmarking multi-vector approximate nearest neighbor search algorithms and their implementations.
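
To make "multi-vector" concrete: each item in these datasets carries several vectors (one per field), and a query is scored against an item by aggregating per-field similarities. The sketch below uses cosine similarity and a plain mean as the aggregation — an illustrative assumption, not necessarily the scoring scheme the Multi-Vector HNSW project uses.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def multi_vector_score(query_vecs, item_vecs):
    # Score an item by averaging per-field cosine similarities.
    # Assumes query and item vectors are aligned by field, e.g.
    # [title, body, tags]; the mean is an illustrative choice.
    sims = [cosine(q, v) for q, v in zip(query_vecs, item_vecs)]
    return sum(sims) / len(sims)

# Toy 3-field example with 2-dimensional vectors (the real data uses 768).
query = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
item = [[1.0, 0.0], [1.0, 0.0], [1.0, 1.0]]
score = multi_vector_score(query, item)
```

An exact search would compute this score for every item; the point of an HNSW-style index is to approximate the top results without scanning everything.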

---

Source: [habedi/stack-exchange-dataset](https://huggingface.co/datasets/habedi/stack-exchange-dataset)

Each row contains:

- `id`: unique post ID
- `title`: the post title (text)
- `body`: the main body content (HTML tags removed)
- `tags`: associated tags (text)
- `embedding`: a list of three 768-dimensional vectors for `[title, body, tags]`

Text embeddings were generated using [`all-mpnet-base-v2`](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) from Sentence Transformers.

| Index | Dataset | Size |
|-------|---------------------|--------:|

---

Source: [habedi/flickr-8k-dataset-clean](https://www.kaggle.com/datasets/habedi/flickr-8k-dataset-clean)

Each row contains:

- `id`: image filename
- `captions`: a list of 5 human-written captions
- `image`: raw image data (JPEG format)
- `embedding`: a list of six 768-dimensional vectors: 5 for the captions, 1 for the image

Caption embeddings were generated using [`all-mpnet-base-v2`](https://huggingface.co/sentence-transformers/all-mpnet-base-v2).

Image embeddings were generated using [`openai/clip-vit-base-patch32`](https://huggingface.co/openai/clip-vit-base-patch32) via the Hugging Face Transformers library.
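
A hedged sketch of producing a 768-dimensional image vector from that checkpoint. Note that CLIP's projected image features are 512-dimensional, so to match the 768-dimensional vectors described above this example takes the vision tower's pooled output instead — an assumption about how the embeddings were made, not a documented recipe:

```python
# Sketch: one 768-dimensional image embedding from clip-vit-base-patch32.
# Uses CLIPVisionModel's pooled output (768-d) rather than the projected
# CLIP image features (512-d); this choice is an assumption.
import torch
from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModel

model = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224), "gray")  # stand-in for a Flickr8k JPEG
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

vector = outputs.pooler_output[0]  # shape: (768,)
```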

| Index | Dataset | Size |
|-------|------------------------------|-------:|
| 1 | Flickr8k (captions + image) | 8,091 |