---
language: []
pretty_name: alagesse/gnn-datasets
size_categories:
- 10M<n<100M
configs:
- config_name: zinc-12k
  data_files:
  - split: train
    path: zinc-12k/train*.parquet
  - split: validation
    path: zinc-12k/validation*.parquet
  - split: test
    path: zinc-12k/test*.parquet
- config_name: zinc-250k
  data_files:
  - split: train
    path: zinc-250k/train*.parquet
  - split: validation
    path: zinc-250k/validation*.parquet
  - split: test
    path: zinc-250k/test*.parquet
- config_name: aqsol
  data_files:
  - split: train
    path: aqsol/train*.parquet
  - split: validation
    path: aqsol/validation*.parquet
  - split: test
    path: aqsol/test*.parquet
- config_name: pcqm4mv2-2d
  data_files:
  - split: train
    path: pcqm4mv2-2d/train*.parquet
  - split: validation
    path: pcqm4mv2-2d/validation*.parquet
  - split: test_dev
    path: pcqm4mv2-2d/test_dev*.parquet
  - split: test_challenge
    path: pcqm4mv2-2d/test_challenge*.parquet
- config_name: pcqm4mv2-3d
  data_files:
  - split: train
    path: pcqm4mv2-3d/train*.parquet
- config_name: pcqm4mv2-smiles
  data_files:
  - split: train
    path: pcqm4mv2-smiles/train*.parquet
  - split: validation
    path: pcqm4mv2-smiles/validation*.parquet
  - split: test_dev
    path: pcqm4mv2-smiles/test_dev*.parquet
  - split: test_challenge
    path: pcqm4mv2-smiles/test_challenge*.parquet
- config_name: qm9
  data_files:
  - split: train
    path: qm9/train*.parquet
  - split: validation
    path: qm9/validation*.parquet
  - split: test
    path: qm9/test*.parquet
- config_name: qm9-smiles
  data_files:
  - split: train
    path: qm9-smiles/train*.parquet
  - split: validation
    path: qm9-smiles/validation*.parquet
  - split: test
    path: qm9-smiles/test*.parquet
---
# alagesse/gnn-datasets
This repository hosts multiple standardized graph datasets. Load any configuration via:

```python
from datasets import load_dataset

ds = load_dataset("alagesse/gnn-datasets", "<config_name>", split="<split_name>")
```
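Each split is backed by parquet shards selected with the filename globs in the frontmatter (e.g. `zinc-12k/train*.parquet`). A minimal sketch of how such a glob resolves, using Python's standard `fnmatch`; the shard filenames below are hypothetical, and the actual shard names in the repository may differ:

```python
from fnmatch import fnmatch

# Hypothetical shard names; real shard names in the repo may differ.
files = [
    "zinc-12k/train-00000-of-00001.parquet",
    "zinc-12k/validation-00000-of-00001.parquet",
    "zinc-12k/test-00000-of-00001.parquet",
]

# The pattern "zinc-12k/train*.parquet" from the config selects
# only the training shard(s).
train_files = [f for f in files if fnmatch(f, "zinc-12k/train*.parquet")]
print(train_files)  # ['zinc-12k/train-00000-of-00001.parquet']
```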
Available configurations:
| Configuration |
|---|
| zinc-12k |
| zinc-250k |
| aqsol |
| pcqm4mv2-2d |
| pcqm4mv2-3d |
| pcqm4mv2-smiles |
| qm9 |
| qm9-smiles |
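Note that split names differ across configurations: the `pcqm4mv2-2d` and `pcqm4mv2-smiles` configurations use `test_dev` and `test_challenge` instead of `test`, and `pcqm4mv2-3d` ships only a `train` split. A small helper for checking split availability before calling `load_dataset`, with the layout transcribed from the configuration list above:

```python
# Split layout per configuration, transcribed from the YAML config above.
SPLITS = {
    "zinc-12k": ["train", "validation", "test"],
    "zinc-250k": ["train", "validation", "test"],
    "aqsol": ["train", "validation", "test"],
    "pcqm4mv2-2d": ["train", "validation", "test_dev", "test_challenge"],
    "pcqm4mv2-3d": ["train"],
    "pcqm4mv2-smiles": ["train", "validation", "test_dev", "test_challenge"],
    "qm9": ["train", "validation", "test"],
    "qm9-smiles": ["train", "validation", "test"],
}

def has_split(config: str, split: str) -> bool:
    """Return True if the given configuration declares the given split."""
    return split in SPLITS.get(config, [])

print(has_split("zinc-12k", "test"))          # True
print(has_split("pcqm4mv2-3d", "validation")) # False
```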