Wikidata Entity Embeddings 0.2
Dataset Summary
Wikidata Entity Embeddings is a dataset of embedding vectors for Wikidata entities. Each vector represents a Wikidata item (Q...) or property (P...) based on textual information extracted from Wikidata.
The dataset is part of the Wikidata Embedding Project, an initiative led by Wikimedia Deutschland in collaboration with Jina AI and IBM DataStax. The project provides a publicly accessible Wikidata Vector Database to enable semantic search and support the mission-aligned, open-source AI community in building applications on top of Wikidata.
A publicly accessible API is available for querying the vector database containing these embeddings:
- API: wd-vectordb.wmcloud.org
- Documentation: wd-vectordb.wmcloud.org/docs
- Project Page: wikidata.org/wiki/Wikidata:Vector_Database
Additional details about the embedding pipeline and infrastructure are available on the project page.
Dataset Structure
Dataset Statistics
The dataset contains:
- 44 million vectors
- 23 million unique Wikidata entities (entities linked to at least one Wikipedia page)
- 512-dimensional embeddings
- Languages: English (en), French (fr), German (de), Arabic (ar)
| Language | Vectors | Unique WD Items |
|---|---|---|
| English | 21,127,781 | 21,094,882 |
| French | 10,662,599 | 10,631,982 |
| German | 9,793,965 | 9,773,883 |
| Arabic | 2,986,814 | 2,974,632 |
Embeddings are generated separately for each language using textual representations of entities. Therefore, the same entity may appear multiple times with different language-specific embeddings.
Additional languages will be added in future releases.
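As a quick sanity check, the per-language vector counts in the table above sum to the roughly 44 million total stated in the statistics:

```python
# Per-language vector counts, taken from the statistics table above.
vector_counts = {
    "en": 21_127_781,
    "fr": 10_662_599,
    "de": 9_793_965,
    "ar": 2_986_814,
}

total = sum(vector_counts.values())
print(f"total vectors: {total:,}")  # 44,571,159 (~44 million)
```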
Data Fields
The dataset is organized by language and stored as Parquet shards: data/<lang>/shard-000001.parquet
Each shard contains the following columns:
| Field | Type | Description |
|---|---|---|
| id | string | Unique row identifier |
| vector | string | Base64-encoded float32 embedding vector |
| lang | string | Language used to generate the embedding |
| wdid | string | Wikidata identifier (QID or PID) |
How to Decode Vectors
The vector column contains base64-encoded, little-endian float32 arrays.
Example encoding and decoding:

```python
from datasets import load_dataset
import base64
import numpy as np

LANGUAGE = "en"

def encode_vector(vector_arr: np.ndarray) -> str:
    # Cast to little-endian float32 before serialising, so the byte
    # order matches the format used in the dataset.
    binary_data = vector_arr.astype("<f4", copy=False).tobytes()
    return base64.b64encode(binary_data).decode("utf8")

def decode_vector(vector_b64: str) -> np.ndarray:
    binary_data = base64.b64decode(vector_b64)
    return np.frombuffer(binary_data, dtype="<f4")

ds = load_dataset(
    "philippesaade/Wikidata_Vectors_0.2",
    data_files=f"data/{LANGUAGE}/*.parquet",
    streaming=True,
)

# Iterate over the streamed "train" split:
for example in ds["train"]:
    vector = decode_vector(example["vector"])
    print("id:", example["id"])
    print("wdid:", example["wdid"])
    print("lang:", example["lang"])
    print("vector shape:", vector.shape)
    print("first 5 values:", vector[:5])
    break  # remove to process the full stream
```
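A quick round-trip check, using the same base64/little-endian-float32 scheme described above, confirms that decoding recovers the original values exactly:

```python
import base64
import numpy as np

# Encode a little-endian float32 array as base64, then decode it back.
original = np.array([0.1, -2.5, 3.75], dtype="<f4")
encoded = base64.b64encode(original.tobytes()).decode("utf8")
decoded = np.frombuffer(base64.b64decode(encoded), dtype="<f4")

assert np.array_equal(original, decoded)
print("round-trip OK:", decoded)
```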
Dataset Creation
Source Data
The dataset is derived from Wikidata, a free and open knowledge base that can be read and edited by both humans and machines. It provides structured data for Wikimedia projects such as Wikipedia, Wikisource, Wikimedia Commons, WikiCite, and Wikivoyage, as well as for applications and services outside Wikimedia. Launched in 2012 by Wikimedia Deutschland and the Wikimedia Foundation, Wikidata has grown into the world’s largest collaboratively edited knowledge graph, containing over 112 million structured data objects. It is maintained by a community of more than 24,000 monthly contributors and is available in over 300 languages.
Entity Selection
Entities are included only if they satisfy the following criteria:
- The entity has at least one Wikipedia sitelink.
- The entity has a label in the target language (or in the multilingual 'mul' language code).
- The entity has either:
  - A description in the target language (or in 'mul'), or
  - At least one statement associated with the entity.
- Our team has the capacity to prioritise, extract, transform, and load the specified language.
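The selection criteria above can be sketched as a filter over entities in the standard Wikidata JSON format (labels, descriptions, claims, sitelinks). This is an illustrative sketch, not the project's actual pipeline, and the sitelink check is a rough heuristic:

```python
def passes_selection(entity: dict, lang: str) -> bool:
    """Sketch of the entity-selection criteria, assuming the standard
    Wikidata JSON entity format. The actual pipeline may differ."""
    # At least one Wikipedia sitelink. Wikipedia site IDs end in "wiki"
    # (e.g. "enwiki"); this heuristic also matches a few non-Wikipedia
    # sites such as "commonswiki".
    has_sitelink = any(k.endswith("wiki") for k in entity.get("sitelinks", {}))
    # Label in the target language or the multilingual 'mul' code.
    labels = entity.get("labels", {})
    has_label = lang in labels or "mul" in labels
    # Either a description (target language or 'mul') or any statement.
    descriptions = entity.get("descriptions", {})
    has_description = lang in descriptions or "mul" in descriptions
    has_statement = bool(entity.get("claims"))
    return has_sitelink and has_label and (has_description or has_statement)

# Minimal fabricated entity for illustration:
example_entity = {
    "labels": {"en": {"language": "en", "value": "Douglas Adams"}},
    "descriptions": {},
    "claims": {"P31": [{}]},
    "sitelinks": {"enwiki": {"site": "enwiki", "title": "Douglas Adams"}},
}
print(passes_selection(example_entity, "en"))  # True: sitelink + label + statement
```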
Vector Generation
The embeddings were computed using jina-embeddings-v3, a multilingual embedding model from Jina AI. For this dataset, vectors were generated with:
- task: retrieval.passage
- embedding size: 512
For each entity, a textual representation was constructed from its label, description, and serialised statements and encoded into a vector. These textual representations were generated using a pipeline available via the Wikidata Textifier API (docs). Further details about the embedding pipeline, text construction, and infrastructure used to generate the vectors are available on the project page.
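Once decoded, the vectors can be compared with cosine similarity, the usual choice for dense retrieval. A minimal sketch, using synthetic 512-dimensional vectors in place of real decoded embeddings:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Synthetic stand-ins for decoded 512-dimensional entity embeddings.
rng = np.random.default_rng(0)
v1 = rng.standard_normal(512).astype(np.float32)
v2 = rng.standard_normal(512).astype(np.float32)

print("self-similarity:", round(cosine_similarity(v1, v1), 3))  # 1.0
print("cross-similarity:", round(cosine_similarity(v1, v2), 3))
```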
Limitations
- The embedding model is not knowledge graph–native. Embeddings are generated from flattened, textual representations of entities rather than directly from the graph structure of Wikidata. This implies that structural relationships in the knowledge graph are captured only indirectly through their textual representations.
- Only entities with at least one Wikipedia sitelink and sufficient textual information are included (see above).
- The dataset is based on the Wikidata dump of September 18, 2024; changes made after this date are not reflected.