---
license: cc-by-sa-4.0
language:
- en
pretty_name: CS Knowledge Graph (OpenAlex)
size_categories:
- 10M<n<100M
task_categories:
- graph-ml
- feature-extraction
tags:
- knowledge-graph
- openalex
- computer-science
- bibliographic
- citation-network
- co-authorship
- scholarly
- link-prediction
- node-classification
configs:
- config_name: 1k_nodes
  default: true
  data_files:
  - split: train
    path: 1k/nodes.parquet
- config_name: 1k_edges
  data_files:
  - split: train
    path: 1k/edges.parquet
- config_name: 10k_nodes
  data_files:
  - split: train
    path: 10k/nodes.parquet
- config_name: 10k_edges
  data_files:
  - split: train
    path: 10k/edges.parquet
- config_name: 100k_nodes
  data_files:
  - split: train
    path: 100k/nodes.parquet
- config_name: 100k_edges
  data_files:
  - split: train
    path: 100k/edges.parquet
- config_name: 1m_nodes
  data_files:
  - split: train
    path: 1m/nodes.parquet
- config_name: 1m_edges
  data_files:
  - split: train
    path: 1m/edges.parquet
- config_name: 10m_nodes
  data_files:
  - split: train
    path: 10m/nodes.parquet
- config_name: 10m_edges
  data_files:
  - split: train
    path: 10m/edges.parquet
---
# CS Knowledge Graph Dataset (OpenAlex)
A multi-scale heterogeneous knowledge graph of Computer Science scholarly data,
built from [OpenAlex](https://openalex.org). Each scale is an independent,
self-contained subgraph centered on Computer Science papers, their authors,
publication venues, and concept tags, plus the relationships between them.
The dataset is intended for research on knowledge graph embeddings, link
prediction, node classification, scholarly recommendation, and graph neural
networks at varying scales of compute.
## Scales
Five scales are provided so the same pipeline can be benchmarked from quick
prototyping (1k) to large-scale training (10m). The scales are sampled
independently rather than as nested cuts, so a larger scale is not guaranteed
to contain a smaller one; treat them as five separate graphs.
| Config | Nodes | Edges | Parquet size | Raw SQLite (zip) |
|--------|-----------:|------------:|-------------:|-----------------:|
| `1k` | 5,237 | 32,655 | 277 KB | 961 KB |
| `10k` | 44,933 | 252,631 | 2.0 MB | 7.7 MB |
| `100k` | 348,983 | 2,162,386 | 16 MB | 68 MB |
| `1m` | 2,384,896 | 13,530,177 | 117 MB | 597 MB |
| `10m` | 7,210,506 | 44,631,484 | 384 MB | 2.1 GB |
## Schema
Each scale exposes two configs, `<scale>_nodes` and `<scale>_edges`. They
share a single split named `train` (a `datasets` convention — there is no
held-out test split, since the intended use is to define your own splits over
the graph).
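Since only a `train` split is shipped, a common pattern is to carve your own edge-level split, e.g. for link prediction. A minimal sketch with pandas (the split ratios, seed, and function name are arbitrary choices, not part of the dataset):

```python
import pandas as pd

def split_edges(edges: pd.DataFrame, val_frac: float = 0.1,
                test_frac: float = 0.1, seed: int = 0):
    # Shuffle once, then slice into disjoint test/val/train partitions.
    shuffled = edges.sample(frac=1.0, random_state=seed).reset_index(drop=True)
    n = len(shuffled)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = shuffled.iloc[:n_test]
    val = shuffled.iloc[n_test:n_test + n_val]
    train = shuffled.iloc[n_test + n_val:]
    return train, val, test
```

For link prediction specifically you would typically also sample negative edges; this sketch only partitions the observed (positive) edges.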
### `nodes` config
| Column | Type | Description |
|--------------|--------|-----------------------------------------------------------------------|
| `node_id` | string | Unique node identifier, prefixed by type (e.g. `paper_W2604738573`). |
| `node_name` | string | Human-readable name (paper title, author display name, venue, etc.). |
| `node_type` | string | One of `Paper`, `Author`, `Venue`, `Concept`. |
| `attributes` | string | Type-specific attributes encoded as a JSON string (see below). |
The `attributes` JSON object has different keys depending on `node_type`:
- **Paper**: `year` (int), `citation_count` (int), `venue` (string), `type` (string, e.g. `article`)
- **Author**: `h_index` (int or null), `citation_count` (int or null), `works_count` (int or null), `institution` (string)
- **Venue**: `type` (string, e.g. `journal`, `conference`), `publisher` (string)
- **Concept**: `domain` (string, e.g. `CS`)
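Because `attributes` is stored as a JSON string, decode it before use; the keys vary by `node_type`, so access them defensively. A small sketch (the sample record and helper name are illustrative, not taken from the data):

```python
import json

def parse_attributes(row: dict) -> dict:
    # Decode the JSON payload and merge it with the node's identity fields.
    attrs = json.loads(row["attributes"])
    return {"node_id": row["node_id"], "node_type": row["node_type"], **attrs}

row = {
    "node_id": "paper_W0000000000",  # illustrative id, not a real record
    "node_name": "Example paper",
    "node_type": "Paper",
    "attributes": '{"year": 2016, "citation_count": 1816, '
                  '"venue": "Example Venue", "type": "article"}',
}
parsed = parse_attributes(row)
```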
### `edges` config
| Column | Type | Description |
|------------|--------|--------------------------------------------------------------------------------------------|
| `source` | string | `node_id` of the source node. |
| `relation` | string | One of `AUTHORED`, `CITES`, `PUBLISHED_IN`, `BELONGS_TO`, `COLLABORATES_WITH`. |
| `target` | string | `node_id` of the target node. |
| `year` | float | Year associated with the edge when applicable (e.g. publication year); `null` otherwise. |
Relation semantics:
- `AUTHORED` — `Author → Paper`
- `CITES` — `Paper → Paper`
- `PUBLISHED_IN` — `Paper → Venue`
- `BELONGS_TO` — `Paper → Concept`
- `COLLABORATES_WITH` — `Author → Author` (co-authorship; symmetric, may appear in both directions)
**Dangling `CITES` targets.** Each scale is built from a Computer Science slice
of OpenAlex, so the `nodes` table only contains CS papers (plus their authors,
venues, and concepts). However, those CS papers may cite papers from outside
CS — those external papers appear as `target` in `CITES` edges but are **not**
present in the `nodes` table. Filter or add placeholder nodes as appropriate
for your task. Sources are always present in `nodes`; only `CITES` targets can
be dangling.
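Both handling strategies can be sketched with pandas (column names follow the schema above; the `ExternalPaper` type label is an arbitrary choice for the stub nodes):

```python
import json
import pandas as pd

def drop_dangling(nodes: pd.DataFrame, edges: pd.DataFrame) -> pd.DataFrame:
    # Strategy 1: keep only edges whose target exists in the node table.
    known = set(nodes["node_id"])
    return edges[edges["target"].isin(known)].reset_index(drop=True)

def add_placeholders(nodes: pd.DataFrame, edges: pd.DataFrame) -> pd.DataFrame:
    # Strategy 2: materialize a stub node for each unknown CITES target.
    known = set(nodes["node_id"])
    missing = sorted(set(edges["target"]) - known)
    stubs = pd.DataFrame({
        "node_id": missing,
        "node_name": [""] * len(missing),
        "node_type": ["ExternalPaper"] * len(missing),  # arbitrary label
        "attributes": [json.dumps({})] * len(missing),
    })
    return pd.concat([nodes, stubs], ignore_index=True)
```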
## Usage
### Load with the `datasets` library
```python
from datasets import load_dataset
# Configs follow the pattern "<scale>_nodes" / "<scale>_edges".
# Scales: 1k, 10k, 100k, 1m, 10m
nodes = load_dataset("jugalgajjar/CS-Knowledge-Graph-Dataset", "10k_nodes", split="train")
edges = load_dataset("jugalgajjar/CS-Knowledge-Graph-Dataset", "10k_edges", split="train")
print(nodes[0])
# {'node_id': 'paper_W...', 'node_name': '...', 'node_type': 'Paper',
# 'attributes': '{"year": 2016, "citation_count": 1816, ...}'}
import json
attrs = json.loads(nodes[0]["attributes"])
```
### Load directly with pandas / pyarrow
```python
import pandas as pd
nodes = pd.read_parquet("hf://datasets/jugalgajjar/CS-Knowledge-Graph-Dataset/100k/nodes.parquet")
edges = pd.read_parquet("hf://datasets/jugalgajjar/CS-Knowledge-Graph-Dataset/100k/edges.parquet")
```
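With a DataFrame in hand, the JSON `attributes` column can be expanded into regular columns, e.g. with `pandas.json_normalize` (a convenience sketch; for the larger scales you may prefer to decode lazily rather than materialize every attribute):

```python
import json
import pandas as pd

def flatten_attributes(nodes: pd.DataFrame) -> pd.DataFrame:
    # Decode each JSON string, then spread the keys into their own columns.
    attrs = pd.json_normalize(nodes["attributes"].map(json.loads).tolist())
    return pd.concat(
        [nodes.drop(columns="attributes").reset_index(drop=True), attrs],
        axis=1,
    )
```

Note that mixing node types in one call yields a sparse result, since `Paper`, `Author`, `Venue`, and `Concept` rows carry different keys; filtering by `node_type` first keeps the columns dense.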
### Build a PyTorch Geometric graph
```python
import numpy as np
import torch
from torch_geometric.data import HeteroData
from datasets import load_dataset
scale = "10k"
nodes = load_dataset("jugalgajjar/CS-Knowledge-Graph-Dataset", f"{scale}_nodes", split="train").to_pandas()
edges = load_dataset("jugalgajjar/CS-Knowledge-Graph-Dataset", f"{scale}_edges", split="train").to_pandas()

# Build per-type id -> contiguous index maps.
data = HeteroData()
id_maps = {}
for ntype, group in nodes.groupby("node_type"):
    ids = group["node_id"].tolist()
    id_maps[ntype] = {nid: i for i, nid in enumerate(ids)}
    data[ntype].num_nodes = len(ids)

# Each node_id is prefixed with its type.
type_from_prefix = {"paper": "Paper", "author": "Author", "venue": "Venue", "concept": "Concept"}

def ntype_of(nid: str) -> str:
    return type_from_prefix[nid.split("_", 1)[0]]

# Drop CITES edges whose target isn't in the node set (cross-domain citations).
node_id_set = set(nodes["node_id"])
edges = edges[edges["target"].isin(node_id_set)].reset_index(drop=True)

for relation, group in edges.groupby("relation"):
    src_type = ntype_of(group["source"].iloc[0])
    dst_type = ntype_of(group["target"].iloc[0])
    src = group["source"].map(id_maps[src_type]).to_numpy(dtype=np.int64)
    dst = group["target"].map(id_maps[dst_type]).to_numpy(dtype=np.int64)
    data[src_type, relation, dst_type].edge_index = torch.from_numpy(np.stack([src, dst]))
print(data)
```
## Raw SQLite databases
In addition to the Parquet files, the original SQLite databases used to build
each scale are available under `raw/`:
```
raw/cs1k_openalex.db.zip
raw/cs10k_openalex.db.zip
raw/cs100k_openalex.db.zip
raw/cs1m_openalex.db.zip
raw/cs10m_openalex.db.zip
```
These are useful if you want to run SQL queries over the source records
directly. Download with `huggingface_hub`:
```python
from huggingface_hub import hf_hub_download
path = hf_hub_download(
    repo_id="jugalgajjar/CS-Knowledge-Graph-Dataset",
    repo_type="dataset",
    filename="raw/cs10k_openalex.db.zip",
)
```
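Each zip contains a single `.db` file; once extracted it can be queried like any SQLite database. A sketch that extracts the archive and opens a connection (the helper name is an assumption; the internal table names are not documented here, so inspect `sqlite_master` first):

```python
import os
import sqlite3
import tempfile
import zipfile

def open_zipped_db(zip_path: str) -> sqlite3.Connection:
    # Extract the .db member into a temporary directory and connect to it.
    tmp = tempfile.mkdtemp()
    with zipfile.ZipFile(zip_path) as zf:
        member = next(n for n in zf.namelist() if n.endswith(".db"))
        zf.extract(member, tmp)
    return sqlite3.connect(os.path.join(tmp, member))

# conn = open_zipped_db(path)
# print(conn.execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall())
```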
## Citation
This dataset was introduced in the following paper. **If you use this dataset
in your work, please cite it.** Please also cite OpenAlex (the source data;
see their [citation guidance](https://docs.openalex.org)).
**BibTeX:**
```bibtex
@inproceedings{gajjar2025hypercomplex,
  title={HyperComplEx: Adaptive Multi-Space Knowledge Graph Embeddings},
  author={Gajjar, Jugal and Ranaware, Kaustik and Subramaniakuppusamy, Kamalasankari and Gandhi, Vaibhav C},
  booktitle={2025 IEEE International Conference on Big Data (BigData)},
  pages={5623--5631},
  year={2025},
  organization={IEEE}
}
```
**APA:**
> Gajjar, J., Ranaware, K., Subramaniakuppusamy, K., & Gandhi, V. C. (2025, December). HyperComplEx: Adaptive Multi-Space Knowledge Graph Embeddings. In *2025 IEEE International Conference on Big Data (BigData)* (pp. 5623–5631). IEEE.
## Source and licensing
- **Source data:** [OpenAlex](https://openalex.org), released into the public
domain under [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
- **This derived dataset:** licensed under
[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/). You may use,
modify, and redistribute it, including commercially, provided you give
attribution and license your derivative works under the same terms.
## Repository layout
```
.
├── README.md
├── 1k/
│ ├── nodes.parquet
│ └── edges.parquet
├── 10k/ (same layout)
├── 100k/ (same layout)
├── 1m/ (same layout)
├── 10m/ (same layout)
└── raw/
├── cs1k_openalex.db.zip
├── cs10k_openalex.db.zip
├── cs100k_openalex.db.zip
├── cs1m_openalex.db.zip
└── cs10m_openalex.db.zip
```