philippesaade committed
Commit 6ee732f · verified · 1 Parent(s): f5dc19b

Update README.md

Files changed (1):
  1. README.md +29 -16

README.md CHANGED
@@ -1,5 +1,5 @@
---
- pretty_name: Wikidata Entity Embeddings 1.0
private: true
license: cc0-1.0
language:
@@ -29,17 +29,18 @@ models:
- jinaai/jina-embeddings-v3
---

- # Wikidata Entity Embeddings 1.0

## Dataset Summary

Wikidata Entity Embeddings is a dataset of embedding vectors for Wikidata entities. Each vector represents a Wikidata item (Q...) or property (P...) based on textual information extracted from Wikidata.

- The dataset is part of the **[Wikidata Embedding Project](https://www.wikidata.org/wiki/Wikidata:Embedding_Project)**, an initiative led by **Wikimedia Deutschland** in collaboration with **[Jina AI](https://jina.ai/)** and **[IBM DataStax](https://www.ibm.com/products/datastax)**. The project provides a publicly accessible **[Wikidata Vector Database](https://www.wikidata.org/wiki/Wikidata:Vector_Database)** to enable semantic search and support the open source AI community in building applications on top of Wikidata.

- A public API is available for querying the vector database containing these embeddings:
* **API**: [wd-vectordb.wmcloud.org](https://wd-vectordb.wmcloud.org/)
* **Documentation**: [wd-vectordb.wmcloud.org/docs](https://wd-vectordb.wmcloud.org/docs)

Additional details about the embedding pipeline and infrastructure are available on the [project page](https://www.wikidata.org/wiki/Wikidata:Vector_Database).

@@ -54,7 +55,7 @@ The dataset contains:
- **512-dimensional embeddings**
- **Languages:** English (en), French (fr), German (de), Arabic (ar)

- | Language | Vectors | Unique Items |
|---|---:|---:|
| English | 21,127,781 | 21,094,882 |
| French | 10,662,599 | 10,631,982 |
@@ -80,9 +81,12 @@ Each shard contains the following columns:
The `vector` column is encoded as base64 representations of little-endian float32 arrays.
Example encoding and decoding:
```python
import base64
import numpy as np

def encode_vector(vector_arr: np.ndarray) -> str:
    binary_data = vector_arr.tobytes()
    return base64.b64encode(binary_data).decode('utf8')
@@ -91,10 +95,20 @@ def decode_vector(vector_b64: str) -> np.ndarray:
    binary_data = base64.b64decode(vector_b64)
    return np.frombuffer(binary_data, dtype="<f4")

- # Example:
- # arr = decode_vector(example_row["vector"])
- # print(arr.shape)
- # print(arr[:10])
```

---
@@ -102,7 +116,7 @@ def decode_vector(vector_b64: str) -> np.ndarray:
## Dataset Creation

### Source Data
- The dataset is derived from [Wikidata](https://www.wikidata.org/), a free and open knowledge graph that can be read and edited by both humans and machines. It provides structured data for Wikimedia projects like Wikipedia, Wikisource, and Wikivoyage as well as applications and services outside Wikimedia. Launched in 2012 by Wikimedia Deutschland, Wikidata has grown into the world’s largest collaboratively edited knowledge graph, containing over 112 million structured data objects. It is maintained by a community of 24,000+ monthly contributors and is available in over 300 languages.

### Entity Selection
Entities are included only if they satisfy the following criteria:
@@ -111,8 +125,7 @@ Entities are included only if they satisfy the following criteria:
3. The entity has either:
   - A description in the target language (or in ‘mul’), or
   - At least one statement associated with the entity.
-
- Because these conditions are evaluated per language, an entity may have embeddings for some languages but not others.

### Vector Generation
The embeddings were computed using [jina-embeddings-v3](https://huggingface.co/jinaai/jina-embeddings-v3), a multilingual embedding model from [Jina AI](https://jina.ai/). For this dataset, vectors were generated with:
@@ -120,11 +133,11 @@ The embeddings were computed using [jina-embeddings-v3](https://huggingface.co/jinaai/jina-embeddings-v3), a multilingual embedding model from [Jina AI](https://jina.ai/). For this dataset, vectors were generated with:
* task: `retrieval.passage`
* embedding size: `512`

- For each entity, a textual representation constructed from its label, description, and serialized statements is encoded into a vector. Further details about the embedding pipeline, text construction, and infrastructure used to generate the vectors are available on the [project page](https://www.wikidata.org/wiki/Wikidata:Vector_Database).

---

## Limitations
- The embedding model is not knowledge graph–native. Embeddings are generated from textual representations of entities rather than directly from the graph structure of Wikidata, meaning structural relationships in the knowledge graph are only captured indirectly through their textual representation.
- Only entities with at least one Wikipedia sitelink and sufficient textual information are included.
- Data updates are limited to the September 18, 2024 dump, and changes after this date are not reflected.
 
---
+ pretty_name: Wikidata Entity Embeddings 0.2
private: true
license: cc0-1.0
language:

- jinaai/jina-embeddings-v3
---

+ # Wikidata Entity Embeddings 0.2

## Dataset Summary

Wikidata Entity Embeddings is a dataset of embedding vectors for Wikidata entities. Each vector represents a Wikidata item (Q...) or property (P...) based on textual information extracted from Wikidata.

+ The dataset is part of the **[Wikidata Embedding Project](https://www.wikidata.org/wiki/Wikidata:Embedding_Project)**, an initiative led by **Wikimedia Deutschland** in collaboration with **[Jina AI](https://jina.ai/)** and **[IBM DataStax](https://www.ibm.com/products/datastax)**. The project provides a publicly accessible **[Wikidata Vector Database](https://www.wikidata.org/wiki/Wikidata:Vector_Database)** to enable semantic search and support the mission-aligned, open-source AI community in building applications on top of Wikidata.

+ A public API is available for querying the vector database that contains these embeddings:
* **API**: [wd-vectordb.wmcloud.org](https://wd-vectordb.wmcloud.org/)
* **Documentation**: [wd-vectordb.wmcloud.org/docs](https://wd-vectordb.wmcloud.org/docs)
+ * **Project Page**: [wikidata.org/wiki/Wikidata:Vector_Database](https://www.wikidata.org/wiki/Wikidata:Vector_Database)

Additional details about the embedding pipeline and infrastructure are available on the [project page](https://www.wikidata.org/wiki/Wikidata:Vector_Database).

- **512-dimensional embeddings**
- **Languages:** English (en), French (fr), German (de), Arabic (ar)

+ | Language | Vectors | Unique Wikidata Items |
|---|---:|---:|
| English | 21,127,781 | 21,094,882 |
| French | 10,662,599 | 10,631,982 |
 
The `vector` column is encoded as base64 representations of little-endian float32 arrays.
Example encoding and decoding:
```python
+ from datasets import load_dataset
import base64
import numpy as np

+ LANGUAGE = 'en'
+
def encode_vector(vector_arr: np.ndarray) -> str:
    binary_data = vector_arr.tobytes()
    return base64.b64encode(binary_data).decode('utf8')

def decode_vector(vector_b64: str) -> np.ndarray:
    binary_data = base64.b64decode(vector_b64)
    return np.frombuffer(binary_data, dtype="<f4")

+ ds = load_dataset(
+     "philippesaade/Wikidata_Vectors_0.2",
+     data_files=f"data/{LANGUAGE}/*.parquet",
+     split="train",
+     streaming=True,
+ )
+
+ # Iterate over the streaming dataset:
+ for example in ds:
+     vector = decode_vector(example["vector"])
+     print("id:", example["id"])
+     print("wdid:", example["wdid"])
+     print("lang:", example["lang"])
+     print("vector shape:", vector.shape)
+     print("first 5 values:", vector[:5])
+     break  # stop after the first example
```
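Once decoded, rows can be compared with standard similarity measures for semantic search. A minimal sketch in plain NumPy, using toy random 512-dimensional vectors in place of real dataset rows, verifies that the base64 round trip is lossless and computes cosine similarity:

```python
import base64
import numpy as np

def decode_vector(vector_b64: str) -> np.ndarray:
    binary_data = base64.b64decode(vector_b64)
    return np.frombuffer(binary_data, dtype="<f4")

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy float32 vectors standing in for real dataset rows.
rng = np.random.default_rng(0)
v1 = rng.standard_normal(512).astype("<f4")
v2 = rng.standard_normal(512).astype("<f4")

# Round-trip v1 through the dataset's base64 encoding, then compare.
b64 = base64.b64encode(v1.tobytes()).decode("utf8")
decoded = decode_vector(b64)

print(np.array_equal(v1, decoded))                     # True: the encoding is lossless
print(round(cosine_similarity(decoded, decoded), 4))   # 1.0 (a vector vs. itself)
print(-1.0 <= cosine_similarity(decoded, v2) <= 1.0)   # True
```

For ranking search results at scale, the same dot-product computation is typically vectorised over a matrix of normalised embeddings rather than done pairwise.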

---

## Dataset Creation

### Source Data
+ The dataset is derived from [Wikidata](https://www.wikidata.org/), a free and open knowledge graph that can be read and edited by both humans and machines. It provides structured data for Wikimedia projects such as Wikipedia, Wikisource, Wikimedia Commons, WikiCite, and Wikivoyage, as well as applications and services outside Wikimedia. Launched in 2012 by Wikimedia Deutschland and the Wikimedia Foundation, Wikidata has grown into the world’s largest collaboratively edited knowledge graph, containing over 112 million structured data objects. It is maintained by a community of 24,000+ monthly contributors and is available in over 300 languages.

### Entity Selection
Entities are included only if they satisfy the following criteria:

3. The entity has either:
   - A description in the target language (or in ‘mul’), or
   - At least one statement associated with the entity.
+ 4. Our team has the capacity to prioritise, extract, transform, and load the specified language.
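The per-language checks above can be sketched as a simple predicate. Criteria 1 and 2 are not visible in this hunk, so the sketch assumes a Wikipedia-sitelink requirement and a label requirement (as implied by the Limitations section); the entity dict layout and field names are hypothetical, not the actual pipeline's data model:

```python
def is_included(entity: dict, lang: str) -> bool:
    # Hypothetical fields; real Wikidata JSON differs in shape.
    labels = entity.get("labels", {})
    descriptions = entity.get("descriptions", {})
    has_sitelink = bool(entity.get("sitelinks"))                  # assumed criterion 1
    has_label = lang in labels or "mul" in labels                 # assumed criterion 2
    has_description = lang in descriptions or "mul" in descriptions
    has_statement = bool(entity.get("claims"))
    return has_sitelink and has_label and (has_description or has_statement)

douglas = {
    "sitelinks": {"enwiki": "Douglas Adams"},
    "labels": {"en": "Douglas Adams"},
    "descriptions": {"en": "English writer and humorist"},
    "claims": {"P31": ["Q5"]},
}
print(is_included(douglas, "en"))  # True
print(is_included(douglas, "fr"))  # False: no French (or 'mul') label
```

Because the predicate is evaluated per language, the same entity can pass for one language and fail for another, which is why per-language vector counts differ.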
 

### Vector Generation
The embeddings were computed using [jina-embeddings-v3](https://huggingface.co/jinaai/jina-embeddings-v3), a multilingual embedding model from [Jina AI](https://jina.ai/). For this dataset, vectors were generated with:
* task: `retrieval.passage`
* embedding size: `512`

+ For each entity, a textual representation was constructed from its label, description, and serialised statements and encoded into a vector. These textual representations were generated using a pipeline available via the [Wikidata Textifier API](https://wd-textify.wmcloud.org/) ([docs](https://wd-textify.wmcloud.org/docs)). Further details about the embedding pipeline, text construction, and infrastructure used to generate the vectors are available on the [project page](https://www.wikidata.org/wiki/Wikidata:Vector_Database).
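To illustrate the idea of flattening an entity into text before embedding: the actual serialisation is produced by the Textifier service linked above, so the helper and output format below are simplified stand-ins, not the service's real output:

```python
def textify(label: str, description: str, statements: dict) -> str:
    # Flatten label, description, and statements into one text string.
    # Simplified stand-in for the Wikidata Textifier's serialisation.
    lines = [f"{label}, {description}"]
    for prop, values in statements.items():
        lines.append(f"{prop}: {', '.join(values)}")
    return "\n".join(lines)

text = textify(
    "Douglas Adams",
    "English writer and humorist",
    {"instance of": ["human"], "occupation": ["novelist", "screenwriter"]},
)
print(text)
# Douglas Adams, English writer and humorist
# instance of: human
# occupation: novelist, screenwriter
```

A string like this is what gets passed to the embedding model with task `retrieval.passage`, so the vector reflects labels and statements only as flat text, not as graph structure.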

---

## Limitations
+ - The embedding model is not knowledge graph–native. Embeddings are generated from flattened, textual representations of entities rather than directly from the graph structure of Wikidata, so structural relationships in the knowledge graph are captured only indirectly through their textual representations.
+ - Only entities with at least one Wikipedia sitelink and sufficient textual information are included (see the selection criteria above).
+ - Data updates are limited to the September 18, 2024 Wikidata dump; changes after this date are not reflected.