---
license: cc-by-sa-4.0
language:
- en
pretty_name: CS Knowledge Graph (OpenAlex)
size_categories:
- 10M<n<100M
task_categories:
- graph-ml
- feature-extraction
tags:
- knowledge-graph
- openalex
- computer-science
- bibliographic
- citation-network
- co-authorship
- scholarly
- link-prediction
- node-classification
configs:
- config_name: 1k_nodes
  default: true
  data_files:
  - split: train
    path: 1k/nodes.parquet
- config_name: 1k_edges
  data_files:
  - split: train
    path: 1k/edges.parquet
- config_name: 10k_nodes
  data_files:
  - split: train
    path: 10k/nodes.parquet
- config_name: 10k_edges
  data_files:
  - split: train
    path: 10k/edges.parquet
- config_name: 100k_nodes
  data_files:
  - split: train
    path: 100k/nodes.parquet
- config_name: 100k_edges
  data_files:
  - split: train
    path: 100k/edges.parquet
- config_name: 1m_nodes
  data_files:
  - split: train
    path: 1m/nodes.parquet
- config_name: 1m_edges
  data_files:
  - split: train
    path: 1m/edges.parquet
- config_name: 10m_nodes
  data_files:
  - split: train
    path: 10m/nodes.parquet
- config_name: 10m_edges
  data_files:
  - split: train
    path: 10m/edges.parquet
---
# CS Knowledge Graph Dataset (OpenAlex)

A multi-scale heterogeneous knowledge graph of Computer Science scholarly data,
built from [OpenAlex](https://openalex.org). Each scale is an independent,
self-contained subgraph centered on Computer Science papers, their authors,
publication venues, and concept tags, plus the relationships between them.
The dataset is intended for research on knowledge graph embeddings, link
prediction, node classification, scholarly recommendation, and graph neural
networks at varying scales of compute.
## Scales

Five scales are provided so the same pipeline can be benchmarked from quick
prototyping (1k) to large-scale training (10m). Although each larger scale
covers roughly the same population as the smaller ones, every scale is sampled
independently, so treat them as five separate graphs rather than nested cuts.
| Config | Nodes     | Edges      | Parquet size | Raw SQLite (zip) |
|--------|----------:|-----------:|-------------:|-----------------:|
| `1k`   | 5,237     | 32,655     | 277 KB       | 961 KB           |
| `10k`  | 44,933    | 252,631    | 2.0 MB       | 7.7 MB           |
| `100k` | 348,983   | 2,162,386  | 16 MB        | 68 MB            |
| `1m`   | 2,384,896 | 13,530,177 | 117 MB       | 597 MB           |
| `10m`  | 7,210,506 | 44,631,484 | 384 MB       | 2.1 GB           |
## Schema

Each scale exposes two configs, `<scale>_nodes` and `<scale>_edges`. Both use
a single split named `train`. This is a `datasets` convention; there is no
held-out test split, since the intended use is to define your own splits over
the graph.
### `nodes` config

| Column       | Type   | Description                                                          |
|--------------|--------|----------------------------------------------------------------------|
| `node_id`    | string | Unique node identifier, prefixed by type (e.g. `paper_W2604738573`). |
| `node_name`  | string | Human-readable name (paper title, author display name, venue, etc.). |
| `node_type`  | string | One of `Paper`, `Author`, `Venue`, `Concept`.                        |
| `attributes` | string | Type-specific attributes encoded as a JSON string (see below).       |
The `attributes` JSON object has different keys depending on `node_type`:

- **Paper**: `year` (int), `citation_count` (int), `venue` (string), `type` (string, e.g. `article`)
- **Author**: `h_index` (int or null), `citation_count` (int or null), `works_count` (int or null), `institution` (string)
- **Venue**: `type` (string, e.g. `journal`, `conference`), `publisher` (string)
- **Concept**: `domain` (string, e.g. `CS`)
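
Since `attributes` is a JSON string rather than a struct column, it has to be
decoded before use. A minimal sketch, assuming a local copy of the `10k` files
(any loading method from the Usage section works):

```python
import json

import pandas as pd

nodes = pd.read_parquet("10k/nodes.parquet")

# Decode the JSON strings, then pull per-type fields out of the dicts.
papers = nodes[nodes["node_type"] == "Paper"].copy()
attrs = papers["attributes"].map(json.loads)
papers["year"] = attrs.map(lambda a: a.get("year"))
papers["citation_count"] = attrs.map(lambda a: a.get("citation_count"))

print(papers[["node_id", "year", "citation_count"]].head())
```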
### `edges` config

| Column     | Type   | Description                                                                              |
|------------|--------|------------------------------------------------------------------------------------------|
| `source`   | string | `node_id` of the source node.                                                            |
| `relation` | string | One of `AUTHORED`, `CITES`, `PUBLISHED_IN`, `BELONGS_TO`, `COLLABORATES_WITH`.           |
| `target`   | string | `node_id` of the target node.                                                            |
| `year`     | float  | Year associated with the edge when applicable (e.g. publication year); `null` otherwise. |
Relation semantics:

- `AUTHORED`: `Author → Paper`
- `CITES`: `Paper → Paper`
- `PUBLISHED_IN`: `Paper → Venue`
- `BELONGS_TO`: `Paper → Concept`
- `COLLABORATES_WITH`: `Author → Author` (co-authorship; symmetric, may appear in both directions)
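
Because node ids are prefixed with their type, these endpoint types can be
verified directly from the edge table. A quick sanity-check sketch, assuming
a local copy of the `1k` files:

```python
import pandas as pd

edges = pd.read_parquet("1k/edges.parquet")  # smallest scale, quick to scan

# Read the endpoint type straight off the id prefix (e.g. "paper_W..." -> "paper").
summary = (
    edges.assign(
        src_type=edges["source"].str.split("_", n=1).str[0],
        dst_type=edges["target"].str.split("_", n=1).str[0],
    )
    .groupby(["relation", "src_type", "dst_type"])
    .size()
)
print(summary)
```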
**Dangling `CITES` targets.** Each scale is built from a Computer Science slice
of OpenAlex, so the `nodes` table only contains CS papers (plus their authors,
venues, and concepts). However, those CS papers may cite papers from outside
CS: those external papers appear as `target` in `CITES` edges but are **not**
present in the `nodes` table. Filter them out or add placeholder nodes as
appropriate for your task, as sketched below. Sources are always present in
`nodes`; only `CITES` targets can be dangling.
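
Both options are easy to express in pandas. A sketch, assuming a local copy of
the `10k` files (the placeholder attributes here are this card's illustration,
not part of the dataset):

```python
import pandas as pd

nodes = pd.read_parquet("10k/nodes.parquet")
edges = pd.read_parquet("10k/edges.parquet")

known = set(nodes["node_id"])
dangling = edges["relation"].eq("CITES") & ~edges["target"].isin(known)
print(f"{int(dangling.sum())} dangling CITES edges out of {len(edges)}")

# Option 1: drop the dangling edges to get a closed graph.
edges_closed = edges[~dangling].reset_index(drop=True)

# Option 2: keep them and add minimal placeholder Paper nodes instead.
placeholders = pd.DataFrame({"node_id": edges.loc[dangling, "target"].unique()})
placeholders["node_name"] = None
placeholders["node_type"] = "Paper"
placeholders["attributes"] = "{}"
nodes_padded = pd.concat([nodes, placeholders], ignore_index=True)
```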
## Usage

### Load with the `datasets` library

```python
from datasets import load_dataset

# Configs follow the pattern "<scale>_nodes" / "<scale>_edges".
# Scales: 1k, 10k, 100k, 1m, 10m
nodes = load_dataset("jugalgajjar/CS-Knowledge-Graph-Dataset", "10k_nodes", split="train")
edges = load_dataset("jugalgajjar/CS-Knowledge-Graph-Dataset", "10k_edges", split="train")

print(nodes[0])
# {'node_id': 'paper_W...', 'node_name': '...', 'node_type': 'Paper',
#  'attributes': '{"year": 2016, "citation_count": 1816, ...}'}

import json
attrs = json.loads(nodes[0]["attributes"])
```
### Load directly with pandas / pyarrow

```python
import pandas as pd

nodes = pd.read_parquet("hf://datasets/jugalgajjar/CS-Knowledge-Graph-Dataset/100k/nodes.parquet")
edges = pd.read_parquet("hf://datasets/jugalgajjar/CS-Knowledge-Graph-Dataset/100k/edges.parquet")
```
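
The `hf://` paths are resolved through `huggingface_hub`'s fsspec integration,
so that package must be installed. If you prefer an explicit local file, a
pyarrow sketch along the same lines:

```python
import pyarrow.parquet as pq
from huggingface_hub import hf_hub_download

# Download once into the local HF cache, then read with pyarrow.
local = hf_hub_download(
    repo_id="jugalgajjar/CS-Knowledge-Graph-Dataset",
    repo_type="dataset",
    filename="100k/edges.parquet",
)
table = pq.read_table(local, columns=["source", "relation", "target"])
print(table.num_rows)
```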
### Build a PyTorch Geometric graph

```python
import numpy as np
import torch
from torch_geometric.data import HeteroData
from datasets import load_dataset

scale = "10k"
nodes = load_dataset("jugalgajjar/CS-Knowledge-Graph-Dataset", f"{scale}_nodes", split="train").to_pandas()
edges = load_dataset("jugalgajjar/CS-Knowledge-Graph-Dataset", f"{scale}_edges", split="train").to_pandas()

# Build per-type id -> contiguous index maps
data = HeteroData()
id_maps = {}
for ntype, group in nodes.groupby("node_type"):
    ids = group["node_id"].tolist()
    id_maps[ntype] = {nid: i for i, nid in enumerate(ids)}
    data[ntype].num_nodes = len(ids)

# Each node_id is prefixed with its type
type_from_prefix = {"paper": "Paper", "author": "Author", "venue": "Venue", "concept": "Concept"}

def ntype_of(nid: str) -> str:
    return type_from_prefix[nid.split("_", 1)[0]]

# Drop CITES edges whose target isn't in the node set (cross-domain citations).
node_id_set = set(nodes["node_id"])
edges = edges[edges["target"].isin(node_id_set)].reset_index(drop=True)

for relation, group in edges.groupby("relation"):
    src_type = ntype_of(group["source"].iloc[0])
    dst_type = ntype_of(group["target"].iloc[0])
    src = group["source"].map(id_maps[src_type]).to_numpy(dtype=np.int64)
    dst = group["target"].map(id_maps[dst_type]).to_numpy(dtype=np.int64)
    data[src_type, relation, dst_type].edge_index = torch.from_numpy(np.stack([src, dst]))

print(data)
```
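
Since only a `train` split ships with the card, downstream splits are up to
you. As one illustration (not part of the dataset itself), PyG's
`RandomLinkSplit` transform can carve validation and test edges out of the
citation relation, continuing from the `data` object built above:

```python
from torch_geometric.transforms import RandomLinkSplit

# A sketch of a link-prediction split over CITES edges; tune the ratios
# and edge types to your task.
transform = RandomLinkSplit(
    num_val=0.1,
    num_test=0.1,
    edge_types=[("Paper", "CITES", "Paper")],
)
train_data, val_data, test_data = transform(data)
```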
## Raw SQLite databases

In addition to the Parquet files, the original SQLite databases used to build
each scale are available under `raw/`:

```
raw/cs1k_openalex.db.zip
raw/cs10k_openalex.db.zip
raw/cs100k_openalex.db.zip
raw/cs1m_openalex.db.zip
raw/cs10m_openalex.db.zip
```

These are useful if you want to run SQL queries over the source records
directly. Download with `huggingface_hub`:

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="jugalgajjar/CS-Knowledge-Graph-Dataset",
    repo_type="dataset",
    filename="raw/cs10k_openalex.db.zip",
)
```
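
From there, the standard library is enough to unpack and explore the database.
A sketch that continues from `path` above (the `.db` member name inside the
archive is an assumption here; check `namelist()` first):

```python
import sqlite3
import zipfile

with zipfile.ZipFile(path) as zf:
    print(zf.namelist())        # confirm the member name first
    zf.extractall("raw_db")

con = sqlite3.connect("raw_db/cs10k_openalex.db")  # assumed member name
tables = con.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"
).fetchall()
print(tables)  # inspect the schema before writing real queries
con.close()
```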
## Citation

This dataset was introduced in the following paper. **If you use this dataset
in your work, please cite it.** Please also cite OpenAlex (the source data;
see their [citation guidance](https://docs.openalex.org)).

**BibTeX:**

```bibtex
@inproceedings{gajjar2025hypercomplex,
  title={HyperComplEx: Adaptive Multi-Space Knowledge Graph Embeddings},
  author={Gajjar, Jugal and Ranaware, Kaustik and Subramaniakuppusamy, Kamalasankari and Gandhi, Vaibhav C},
  booktitle={2025 IEEE International Conference on Big Data (BigData)},
  pages={5623--5631},
  year={2025},
  organization={IEEE}
}
```

**APA:**

> Gajjar, J., Ranaware, K., Subramaniakuppusamy, K., & Gandhi, V. C. (2025, December). HyperComplEx: Adaptive Multi-Space Knowledge Graph Embeddings. In *2025 IEEE International Conference on Big Data (BigData)* (pp. 5623–5631). IEEE.
## Source and licensing

- **Source data:** [OpenAlex](https://openalex.org), released into the public
  domain under [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
- **This derived dataset:** licensed under
  [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/). You may use,
  modify, and redistribute it, including commercially, provided you give
  attribution and license your derivative works under the same terms.
## Repository layout

```
.
├── README.md
├── 1k/
│   ├── nodes.parquet
│   └── edges.parquet
├── 10k/    (same layout)
├── 100k/   (same layout)
├── 1m/     (same layout)
├── 10m/    (same layout)
└── raw/
    ├── cs1k_openalex.db.zip
    ├── cs10k_openalex.db.zip
    ├── cs100k_openalex.db.zip
    ├── cs1m_openalex.db.zip
    └── cs10m_openalex.db.zip
```