---
license: cc0-1.0
---

# Wikidata Extraction

This dataset contains all RDF triples extracted from the latest Wikidata dump, converted from the N-Triples format to Parquet.

The data originates from [Wikidata](https://www.wikidata.org/), a free and open knowledge base that acts as central storage for the structured data used by Wikipedia and other Wikimedia projects. The source file is the "truthy" N-Triples dump (`latest-truthy.nt.bz2`), which contains only the current, non-deprecated statements.

The code to extract this data is available at [github.com/piebro/wikidata-extraction](https://github.com/piebro/wikidata-extraction).
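
The mapping from an N-Triples line to the four-column schema can be pictured with a small parser. This is a rough, hypothetical sketch, not the repo's actual converter (which handles more cases, e.g. datatyped literals like `"..."^^<type>`):

```python
import re

# Handles URI subjects/predicates and objects that are either URIs or
# plain/language-tagged literals. Datatyped literals are NOT covered here.
TRIPLE_RE = re.compile(
    r'^<(?P<subject>[^>]+)>\s+'
    r'<(?P<predicate>[^>]+)>\s+'
    r'(?:<(?P<uri>[^>]+)>|"(?P<literal>(?:[^"\\]|\\.)*)"(?:@(?P<lang>[A-Za-z0-9-]+))?)'
    r'\s*\.\s*$'
)

def parse_triple(line: str) -> dict:
    m = TRIPLE_RE.match(line)
    if m is None:
        raise ValueError(f"unhandled line: {line!r}")
    is_uri = m.group("uri") is not None
    return {
        "subject": m.group("subject"),
        "predicate": m.group("predicate"),
        "object": m.group("uri") if is_uri else m.group("literal"),
        "language": m.group("lang"),  # None unless object is a tagged literal
    }

row = parse_triple(
    '<http://www.wikidata.org/entity/Q42> '
    '<http://www.w3.org/2000/01/rdf-schema#label> "Douglas Adams"@en .'
)
```

Each parsed line becomes one Parquet row; the `language` field is filled only when the object is a language-tagged literal.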

## Schema

Each row represents a single RDF triple with four columns:

| Column      | Type   | Description                                                     |
|-------------|--------|-----------------------------------------------------------------|
| `subject`   | string | The entity being described (typically a Wikidata Q-ID URL)      |
| `predicate` | string | The property or relationship (typically a Wikidata P-ID URL)    |
| `object`    | string | The value, which can be another entity, a literal, or an external ID |
| `language`  | string | Language tag for literals (e.g., "en", "de"), null otherwise    |

### Example Values

```
subject: http://www.wikidata.org/entity/Q42
predicate: http://www.w3.org/2000/01/rdf-schema#label
object: Douglas Adams
language: en
```

This triple means: "Q42 (Douglas Adams) has the English label 'Douglas Adams'".

```
subject: http://www.wikidata.org/entity/Q42
predicate: http://www.wikidata.org/prop/direct/P31
object: http://www.wikidata.org/entity/Q5
language: null
```

This triple means: "Q42 (Douglas Adams) is an instance of (P31) Q5 (human)". The language is null because the object is a URI, not a language-tagged literal.

## Data Organization

### Triplets

The `triplets/` folder contains the raw RDF triples partitioned into Parquet files (`triplets/*.parquet`).

### Extractions

The `extractions/` folder contains extracted lookup tables and themed datasets:

| File | Description |
|------|-------------|
| `entity_labels.parquet` | Maps entities to their labels |
| `predicate_labels.parquet` | Maps predicates to their labels and descriptions |
| `music/spotify_artist.parquet` | Spotify artist data |
| `music/spotify_album.parquet` | Spotify album data |
| `music/spotify_track.parquet` | Spotify track data |
| `video/youtube_channel.parquet` | YouTube channel data |
| `video/youtube_video.parquet` | YouTube video data |
| `video/letterboxd_film.parquet` | Letterboxd film data |
| `social/bluesky.parquet` | Bluesky account data |
| `social/subreddit.parquet` | Subreddit data |
| `social/patreon.parquet` | Patreon creator data |
| `book/goodread_book.parquet` | Goodreads book data |
| `book/gutenberg_book.parquet` | Project Gutenberg book data |
| `other/github.parquet` | GitHub repository data |
| `other/website.parquet` | Website data |
| `other/non_profit_organization.parquet` | Non-profit organization data |

## Usage

Query the dataset using any Parquet-compatible tool (DuckDB, Pandas, Polars, etc.):

### Find All Properties of an Entity

```python
import duckdb

df = duckdb.sql("""
    SELECT *
    FROM 'triplets/*.parquet'
    WHERE subject = 'http://www.wikidata.org/entity/Q42'
""").df()
print(df)
```

### Look Up Entity Labels

```python
import duckdb

entity_ids = ["Q42", "Q5", "Q64"]
entity_uris = [f"http://www.wikidata.org/entity/{e}" for e in entity_ids]

# Note: tuple() formatting needs at least two IDs; a single-element tuple
# renders with a trailing comma, which is not valid SQL.
df = duckdb.sql(f"""
    SELECT entity, label
    FROM 'extractions/entity_labels.parquet'
    WHERE entity IN {tuple(entity_uris)}
""").df()
print(df)
```

## License

Wikidata content is available under [CC0 1.0 Universal](https://creativecommons.org/publicdomain/zero/1.0/).