---
license: apache-2.0
task_categories:
- feature-extraction
language:
- en
size_categories:
- 10M<n<100M
---
# `wikipedia_en`
This is a curated Wikipedia English dataset for use with the [Chipmunk](https://github.com/Intelligent-Internet/Chipmunk) project.
## Dataset Details
### Dataset Description
This dataset comprises curated English Wikipedia pages. The data is sourced directly from the official English Wikipedia database dump. We extract the pages, chunk them into smaller pieces, and embed each chunk using [Snowflake/snowflake-arctic-embed-m-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v2.0). All embeddings are 16-bit half-precision vectors optimized for `cosine` indexing with [vectorchord](https://github.com/tensorchord/vectorchord).
### Dataset Sources
Based on the official [Wikipedia dumps](https://dumps.wikimedia.org/). Please check the [Wikimedia legal page](https://dumps.wikimedia.org/legal.html) for the license covering the page data.
## Dataset Structure
1. Metadata Table
- id: A unique identifier for the page.
- revid: The revision ID of the page.
- url: The URL of the page.
- title: The title of the page.
- origin_storage_id: The storage ID of the original page.
- created_at: The creation time of the page.
- updated_at: The update time of the page.
- ignored: Whether the page is ignored.
2. Chunking Table
- id: A unique identifier for the chunk.
- title: The title of the page.
- url: The URL of the page.
- snapshot: The snapshot of the page.
- source_id: The source ID of the page.
- chunk_index: The index of the chunk.
- chunk_text: The text of the chunk.
- vector: The vector embedding of the chunk.
- created_at: The creation time of the chunk.
- updated_at: The update time of the chunk.
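The two tables are linked through `source_id`: each row in the chunking table references the `id` of its page in the metadata table. As an illustration, here is a hypothetical query (the page title is only an example value) that reassembles one page's chunks in order, using the tables created in the Uses section below:

```sql
-- Hypothetical example: fetch all chunks of a single page, in order,
-- joined to its metadata row via source_id.
SELECT m.title, m.revid, c.chunk_index, c.chunk_text
FROM ts_wikipedia_en AS m
JOIN ts_wikipedia_en_embed AS c ON c.source_id = m.id
WHERE m.title = 'Alan Turing' AND NOT m.ignored
ORDER BY c.chunk_index;
```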
## Uses
This dataset supports a wide range of applications, including full-text (BM25) and semantic (vector) search over English Wikipedia.
Below is a demo of how to use the dataset with [Chipmunk](https://github.com/Intelligent-Internet/Chipmunk).
### Create the metadata and chunking tables in PostgreSQL
```sql
CREATE TABLE IF NOT EXISTS ts_wikipedia_en (
id BIGSERIAL PRIMARY KEY,
revid BIGINT NOT NULL,
url VARCHAR NOT NULL,
title VARCHAR NOT NULL DEFAULT '',
origin_storage_id VARCHAR DEFAULT NULL, -- column type assumed; field is listed in the metadata schema and indexed below
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
ignored BOOLEAN NOT NULL DEFAULT FALSE
);
CREATE TABLE IF NOT EXISTS ts_wikipedia_en_embed (
id BIGSERIAL PRIMARY KEY,
title VARCHAR NOT NULL,
url VARCHAR NOT NULL,
snapshot VARCHAR NOT NULL,
chunk_index BIGINT NOT NULL,
chunk_text VARCHAR NOT NULL,
source_id BIGINT NOT NULL,
vector halfvec(768) DEFAULT NULL,
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);
```
### Load CSV files into the database
1. Load the dataset from the local file system to a remote PostgreSQL server:
```sql
\copy ts_wikipedia_en FROM 'data/meta/ts_wikipedia_en.csv' CSV HEADER;
\copy ts_wikipedia_en_embed FROM 'data/chunks/0000000.csv' CSV HEADER;
\copy ts_wikipedia_en_embed FROM 'data/chunks/0000001.csv' CSV HEADER;
\copy ts_wikipedia_en_embed FROM 'data/chunks/0000002.csv' CSV HEADER;
...
```
2. Load the dataset from the PostgreSQL server's own file system (server-side `COPY` requires superuser or `pg_read_server_files` privileges):
```sql
copy ts_wikipedia_en FROM 'data/meta/ts_wikipedia_en.csv' CSV HEADER;
copy ts_wikipedia_en_embed FROM 'data/chunks/0000000.csv' CSV HEADER;
copy ts_wikipedia_en_embed FROM 'data/chunks/0000001.csv' CSV HEADER;
copy ts_wikipedia_en_embed FROM 'data/chunks/0000002.csv' CSV HEADER;
...
```
### Create Indexes
Create the following indexes for best performance.
The `vector` column is a `halfvec(768)` column: a 768-dimensional, 16-bit half-precision vector optimized for `cosine` indexing with [vectorchord](https://github.com/tensorchord/vectorchord). See the [vectorchord indexing documentation](https://docs.vectorchord.ai/vectorchord/usage/indexing.html) for more information about the vector index. Note that the `bm25` full-text index below additionally requires a BM25 extension (the `key_field` syntax matches ParadeDB's `pg_search`).
1. Create the metadata table index:
```sql
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_revid_index ON ts_wikipedia_en (revid);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_url_index ON ts_wikipedia_en (url);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_title_index ON ts_wikipedia_en (title);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_origin_storage_id_index ON ts_wikipedia_en (origin_storage_id);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_ignored_index ON ts_wikipedia_en (ignored);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_created_at_index ON ts_wikipedia_en (created_at);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_updated_at_index ON ts_wikipedia_en (updated_at);
```
2. Create the chunking table index:
```sql
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_embed_source_id_index ON ts_wikipedia_en_embed (source_id);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_embed_chunk_index_index ON ts_wikipedia_en_embed (chunk_index);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_embed_chunk_text_index ON ts_wikipedia_en_embed USING bm25 (id, title, chunk_text) WITH (key_field='id');
CREATE UNIQUE INDEX IF NOT EXISTS ts_wikipedia_en_embed_source_index ON ts_wikipedia_en_embed (source_id, chunk_index);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_embed_vector_index ON ts_wikipedia_en_embed USING vchordrq (vector halfvec_cosine_ops) WITH (options = $$
[build.internal]
lists = [20000]
build_threads = 6
spherical_centroids = true
$$);
CREATE INDEX IF NOT EXISTS ts_wikipedia_en_embed_vector_null_index ON ts_wikipedia_en_embed (vector) WHERE vector IS NULL;
SELECT vchordrq_prewarm('ts_wikipedia_en_embed_vector_index');
```
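With the indexes in place, chunks can also be retrieved by cosine similarity directly in SQL. A minimal sketch, assuming `$1` is bound to a 768-dimensional query embedding produced with the same snowflake-arctic-embed-m-v2.0 model:

```sql
-- Hypothetical nearest-neighbour query; $1 must be a halfvec(768)
-- parameter holding the query embedding. <=> is cosine distance.
SELECT title, chunk_text, vector <=> $1 AS cosine_distance
FROM ts_wikipedia_en_embed
WHERE vector IS NOT NULL
ORDER BY vector <=> $1
LIMIT 10;
```

The `WHERE vector IS NOT NULL` filter skips rows whose embeddings have not been loaded yet, matching the partial null index created above.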
### Query with Chipmunk
See the [Chipmunk](https://github.com/Intelligent-Internet/Chipmunk) documentation to learn how to query the dataset.