---
license: cc0-1.0
---
# Wikidata Extraction
This dataset contains all RDF triples from the latest Wikidata truthy dump, converted from the N-Triples format to Parquet.
The data originates from [Wikidata](https://www.wikidata.org/), a free and open knowledge base that acts as central storage for structured data used by Wikipedia and other Wikimedia projects. The source file is the "truthy" N-Triples dump (`latest-truthy.nt.bz2`), which contains only the current, non-deprecated statements.
The code to extract this data is available at [github.com/piebro/wikidata-extraction](https://github.com/piebro/wikidata-extraction).
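Each line of the truthy dump is one N-Triples statement. The following is a rough sketch of how such a line maps onto the dataset's columns; it is a simplified illustration only, since real N-Triples parsing (escape sequences, datatyped literals, blank nodes) needs a proper parser such as rdflib:

```python
import re

# One line from a truthy N-Triples dump: subject predicate object .
line = '<http://www.wikidata.org/entity/Q42> <http://www.w3.org/2000/01/rdf-schema#label> "Douglas Adams"@en .'

# Split into the three terms; the trailing " ." closes the statement.
m = re.match(r'<([^>]+)> <([^>]+)> (.+) \.\s*$', line)
subject, predicate, obj = m.groups()

# Language-tagged literals become (object, language); URIs get language = None.
lang = None
lit = re.match(r'"(.*)"@(\w[\w-]*)$', obj)
if lit:
    obj, lang = lit.groups()
elif obj.startswith('<') and obj.endswith('>'):
    obj = obj[1:-1]

print(subject, predicate, obj, lang)
# → http://www.wikidata.org/entity/Q42 http://www.w3.org/2000/01/rdf-schema#label Douglas Adams en
```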
## Schema
Each row represents a single RDF triple with four columns:
| Column | Type | Description |
|-------------|--------|--------------------------------------------------------------------|
| `subject` | string | The entity being described (typically a Wikidata Q-ID URL) |
| `predicate` | string | The property or relationship (typically a Wikidata P-ID URL) |
| `object` | string | The value, which can be another entity, literal, or external ID |
| `language` | string | Language tag for literals (e.g., "en", "de"), null otherwise |
### Example Values
```
subject: http://www.wikidata.org/entity/Q42
predicate: http://www.w3.org/2000/01/rdf-schema#label
object: Douglas Adams
language: en
```
This triple means: "Q42 (Douglas Adams) has the English label 'Douglas Adams'".
```
subject: http://www.wikidata.org/entity/Q42
predicate: http://www.wikidata.org/prop/direct/P31
object: http://www.wikidata.org/entity/Q5
language: null
```
This triple means: "Q42 (Douglas Adams) is an instance of (P31) Q5 (human)". The language is null because the object is a URI, not a language-tagged literal.
## Data Organization
### Triplets
The `triplets/` folder contains the raw RDF triples partitioned into Parquet files (`triplets/*.parquet`).
### Extractions
The `extractions/` folder contains extracted lookup tables and themed datasets:
| File | Description |
|------|-------------|
| `entity_labels.parquet` | Maps entities to their labels |
| `predicate_labels.parquet` | Maps predicates to their labels and descriptions |
| `music/spotify_artist.parquet` | Spotify artist data |
| `music/spotify_album.parquet` | Spotify album data |
| `music/spotify_track.parquet` | Spotify track data |
| `video/youtube_channel.parquet` | YouTube channel data |
| `video/youtube_video.parquet` | YouTube video data |
| `video/letterboxd_film.parquet` | Letterboxd film data |
| `social/bluesky.parquet` | Bluesky account data |
| `social/subreddit.parquet` | Subreddit data |
| `social/patreon.parquet` | Patreon creator data |
| `book/goodread_book.parquet` | Goodreads book data |
| `book/gutenberg_book.parquet` | Project Gutenberg book data |
| `other/github.parquet` | GitHub repository data |
| `other/website.parquet` | Website data |
| `other/non_profit_organization.parquet` | Non-profit organization data |
## Usage
Query the dataset using any Parquet-compatible tool (DuckDB, Pandas, Polars, etc.):
### Find All Properties of an Entity
```python
import duckdb
df = duckdb.sql("""
SELECT *
FROM 'triplets/*.parquet'
WHERE subject = 'http://www.wikidata.org/entity/Q42'
""").df()
print(df)
```
### Look Up Entity Labels
```python
import duckdb
entity_ids = ["Q42", "Q5", "Q64"]
entity_uris = [f"http://www.wikidata.org/entity/{e}" for e in entity_ids]
# Bind the list as a parameter instead of interpolating it into the SQL
# string; this avoids quoting issues and works for single-element lists too.
df = duckdb.execute("""
    SELECT entity, label
    FROM 'extractions/entity_labels.parquet'
    WHERE entity IN (SELECT unnest(?))
""", [entity_uris]).df()
print(df)
```
## License
Wikidata content is available under [CC0 1.0 Universal](https://creativecommons.org/publicdomain/zero/1.0/).