---
license: apache-2.0
task_categories:
- feature-extraction
language:
- en
size_categories:
- 10M<n<100M
---

# `wikipedia_en`

This is a curated English Wikipedia dataset for use with the [Chipmunk](https://github.com/Intelligent-Internet/Chipmunk) project.

## Dataset Details

### Dataset Description

This dataset comprises curated English Wikipedia pages, sourced directly from the official English Wikipedia database dump. We extract the pages, chunk them into smaller pieces, and embed each chunk with [Snowflake/snowflake-arctic-embed-m-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v2.0). All embeddings are 16-bit half-precision vectors optimized for `cosine` indexing with [VectorChord](https://github.com/tensorchord/vectorchord).

### Dataset Sources

Based on the official [Wikipedia dumps](https://dumps.wikimedia.org/). Please see the [Wikimedia dumps legal page](https://dumps.wikimedia.org/legal.html) for the license of the page data.

## Dataset Structure

1. Metadata Table

- id: A unique identifier for the page.
- revid: The revision ID of the page.
- url: The URL of the page.
- title: The title of the page.
- origin_storage_id: The storage ID of the original page.
- created_at: The creation time of the page.
- updated_at: The update time of the page.
- ignored: Whether the page is ignored.

2. Chunking Table

- id: A unique identifier for the chunk.
- title: The title of the page.
- url: The URL of the page.
- snapshot: The snapshot of the page.
- source_id: The ID of the source page in the metadata table.
- chunk_index: The index of the chunk within its page.
- chunk_text: The text of the chunk.
- vector: The vector embedding of the chunk.
- created_at: The creation time of the chunk.
- updated_at: The update time of the chunk.

## Uses

This dataset supports a wide range of applications.

Here is a demo of how to use the dataset with [Chipmunk](https://github.com/Intelligent-Internet/Chipmunk).

### Create the metadata and chunking tables in PostgreSQL

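The tables and indexes below rely on PostgreSQL extensions: the `halfvec` type comes from pgvector, the `vchordrq` index method from VectorChord, and the `bm25` index method from ParadeDB's pg_search. A minimal setup sketch (extension names assumed from those projects' documentation; verify against your installation):

```sql
-- VectorChord; CASCADE also installs pgvector, which provides halfvec.
CREATE EXTENSION IF NOT EXISTS vchord CASCADE;
-- pg_search provides the bm25 index method used below.
CREATE EXTENSION IF NOT EXISTS pg_search;
```
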
```sql
CREATE TABLE IF NOT EXISTS ts_wikipedia_en (
    id BIGSERIAL PRIMARY KEY,
    revid BIGINT NOT NULL,
    url VARCHAR NOT NULL,
    title VARCHAR NOT NULL DEFAULT '',
    origin_storage_id VARCHAR(1024) NOT NULL DEFAULT '',
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    ignored BOOLEAN NOT NULL DEFAULT FALSE
);

CREATE TABLE IF NOT EXISTS ts_wikipedia_en_embed (
    id BIGSERIAL PRIMARY KEY,
    title VARCHAR NOT NULL,
    url VARCHAR NOT NULL,
    snapshot VARCHAR NOT NULL,
    source_id BIGINT NOT NULL,
    chunk_index BIGINT NOT NULL,
    chunk_text VARCHAR NOT NULL,
    vector halfvec(768) DEFAULT NULL,
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);
```

### Load CSV files into the database

1. Load the dataset from the local file system into a remote PostgreSQL server (client-side `\copy` reads files on the machine where `psql` runs):

```sql
\copy ts_wikipedia_en FROM 'data/meta/ts_wikipedia_en.csv' CSV HEADER;
\copy ts_wikipedia_en_embed FROM 'data/chunks/ts_wikipedia_en_embed_0000000.csv' CSV HEADER;
\copy ts_wikipedia_en_embed FROM 'data/chunks/ts_wikipedia_en_embed_0000001.csv' CSV HEADER;
\copy ts_wikipedia_en_embed FROM 'data/chunks/ts_wikipedia_en_embed_0000002.csv' CSV HEADER;
...
```

2. Load the dataset from the PostgreSQL server's file system (server-side `COPY` requires superuser or the `pg_read_server_files` role, and resolves relative paths against the server process's working directory):

```sql
COPY ts_wikipedia_en FROM 'data/meta/ts_wikipedia_en.csv' CSV HEADER;
COPY ts_wikipedia_en_embed FROM 'data/chunks/ts_wikipedia_en_embed_0000000.csv' CSV HEADER;
COPY ts_wikipedia_en_embed FROM 'data/chunks/ts_wikipedia_en_embed_0000001.csv' CSV HEADER;
COPY ts_wikipedia_en_embed FROM 'data/chunks/ts_wikipedia_en_embed_0000002.csv' CSV HEADER;
...
```
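
With many chunk files, issuing each `\copy` by hand is tedious. A shell sketch of the same client-side load in a loop (assumes `psql` is on `PATH` and connection settings come from the standard `PG*` environment variables; the guard makes the loop a no-op when the glob matches nothing):

```shell
# Load every chunk CSV with client-side \copy.
for f in data/chunks/ts_wikipedia_en_embed_*.csv; do
  [ -e "$f" ] || continue   # glob matched nothing: skip
  psql -c "\\copy ts_wikipedia_en_embed FROM '$f' CSV HEADER;"
done
```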

### Create Indexes

Create the following indexes for the best query performance.

The `vector` column is a `halfvec(768)` column: a 16-bit half-precision vector optimized for `cosine` indexing with [VectorChord](https://github.com/tensorchord/vectorchord). See the [VectorChord indexing documentation](https://docs.vectorchord.ai/vectorchord/usage/indexing.html) for more on the vector index.

1. Create the metadata table indexes:

```sql
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_revid_index ON ts_wikipedia_en (revid);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_url_index ON ts_wikipedia_en (url);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_title_index ON ts_wikipedia_en (title);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_origin_storage_id_index ON ts_wikipedia_en (origin_storage_id);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_ignored_index ON ts_wikipedia_en (ignored);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_created_at_index ON ts_wikipedia_en (created_at);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_updated_at_index ON ts_wikipedia_en (updated_at);
```

2. Create the chunking table indexes:

```sql
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_embed_source_id_index ON ts_wikipedia_en_embed (source_id);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_embed_chunk_index_index ON ts_wikipedia_en_embed (chunk_index);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_embed_chunk_text_index ON ts_wikipedia_en_embed USING bm25 (id, title, chunk_text) WITH (key_field='id');
CREATE UNIQUE INDEX IF NOT EXISTS ts_wikipedia_en_embed_source_index ON ts_wikipedia_en_embed (source_id, chunk_index);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_embed_vector_index ON ts_wikipedia_en_embed USING vchordrq (vector halfvec_cosine_ops) WITH (options = $$
[build.internal]
lists = [20000]
build_threads = 6
spherical_centroids = true
$$);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_embed_vector_null_index ON ts_wikipedia_en_embed (vector) WHERE vector IS NULL;
SELECT vchordrq_prewarm('ts_wikipedia_en_embed_vector_index');
```
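
With the vector index in place, a nearest-neighbor lookup is an `ORDER BY` on cosine distance. A sketch of such a query, assuming `source_id` references the metadata table's `id`; the `:query_vector` placeholder is illustrative and stands for a 768-dimensional embedding of the query text produced by the same snowflake-arctic-embed-m-v2.0 model:

```sql
-- :query_vector is a placeholder for a 768-dim query embedding.
SELECT m.title,
       e.chunk_text,
       e.vector <=> :query_vector::halfvec(768) AS cosine_distance
FROM ts_wikipedia_en_embed AS e
JOIN ts_wikipedia_en AS m ON m.id = e.source_id
WHERE m.ignored = FALSE
ORDER BY e.vector <=> :query_vector::halfvec(768)
LIMIT 10;
```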

### Query with Chipmunk

See the [Chipmunk](https://github.com/Intelligent-Internet/Chipmunk) project to learn how to query the dataset.