---
language:
- en
tags:
- hypergraph
task_categories:
- graph-ml
---
# email-Enron

[Zenodo](https://zenodo.org/records/10155819) | [Cornell](https://www.cs.cornell.edu/~arb/data/email-Enron/) | [Source Paper](https://arxiv.org/abs/1802.06916)

email-Enron is an undirected hypergraph built from the Enron email corpus, designed for higher-order network / hypergraph machine learning. In email communication, a single message can involve more than two people; this dataset captures that group interaction by modeling each email as a hyperedge connecting the sender and all recipients, while nodes represent Enron email addresses (restricted to a core set of employees).
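The modeling convention can be sketched in a few lines with toy data (hypothetical addresses and messages, purely for illustration; the dataset itself ships the hypergraph prebuilt):

```python
# Hypothetical toy messages: integer IDs stand in for email addresses.
emails = [
    {"sender": 0, "recipients": [1, 2, 3]},  # one message, four participants
    {"sender": 1, "recipients": [2]},
]

# Each email becomes one hyperedge: the set of all addresses involved.
# Unlike a pairwise graph, the group interaction is kept intact.
hyperedges = [sorted({e["sender"], *e["recipients"]}) for e in emails]
print(hyperedges)  # [[0, 1, 2, 3], [1, 2]]
```

Note that a pairwise projection of the first message would produce six edges and lose the fact that all four people shared a single email.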
## Usage

```python
import xgi
from datasets import load_dataset

dataset = load_dataset("daqh/email-Enron", split="full")
hypergraphs = [xgi.from_hif_dict(d, nodetype=int, edgetype=int) for d in dataset]
```
## Statistics

<div align="center">
<table>
<tbody>
<tr>
<td colspan="2" align="center">
<figure>
<img src="assets/hypergraph.png" alt="Hypergraph visualization">
</figure>
</td>
</tr>
<tr>
<td align="center">
<figure>
<img src="assets/node-degree-distribution.png" alt="Node degree distribution">
</figure>
</td>
<td align="center">
<figure>
<img src="assets/hyperedge-size-distribution.png" alt="Hyperedge size distribution">
</figure>
</td>
</tr>
</tbody>
</table>
</div>
## Content

The hypergraph is stored in HIF (Hypergraph Interchange Format) as a JSON object, following the schema used to exchange higher-order network data across tools. Concretely, the dataset provides the canonical HIF fields (network-type, metadata, nodes, edges, and incidences), so you can reconstruct the full incidence structure without additional processing.
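As a rough sketch of the record shape, here is a hypothetical, heavily trimmed HIF-style object using the fields named above, and how the incidence list maps hyperedges to their member nodes (real records contain far more nodes, edges, and attributes):

```python
# Hypothetical minimal HIF-style object (illustrative values only).
hif = {
    "network-type": "undirected",
    "metadata": {"name": "email-Enron"},
    "nodes": [{"node": 0}, {"node": 1}, {"node": 2}],
    "edges": [{"edge": 0}],
    "incidences": [
        {"edge": 0, "node": 0},
        {"edge": 0, "node": 1},
        {"edge": 0, "node": 2},
    ],
}

# Reconstruct each hyperedge's member set from the incidence list.
members = {}
for inc in hif["incidences"]:
    members.setdefault(inc["edge"], set()).add(inc["node"])
print(members)  # {0: {0, 1, 2}}
```

In practice `xgi.from_hif_dict` (shown in Usage) performs this reconstruction for you.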
In addition to the raw hypergraph topology, vector features are provided for both nodes and hyperedges (in their attribute dictionaries), enabling out-of-the-box experimentation with representation learning and downstream tasks:

- Spectral features: eigenvectors of the (hypergraph) Laplacian (computed via sparse eigensolvers).
- [Node2Vec](https://arxiv.org/abs/1607.00653) embeddings: random-walk–based structural embeddings.
- [VilLain](https://dl.acm.org/doi/10.1145/3589334.3645454) embeddings: self-supervised hypergraph representation learning via virtual label propagation.

Basic statistics (as packaged here): 143 nodes, 1512 hyperedges, 1 connected component.
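To use these features downstream, the per-node vectors can be stacked into a feature matrix aligned with node order. This is a hypothetical sketch with made-up attribute dicts and a made-up `"node2vec"` key; inspect the actual attribute names after loading the dataset:

```python
# Hypothetical node records with embedding vectors in attribute dicts.
nodes = [
    {"node": 0, "attrs": {"node2vec": [0.1, 0.2]}},
    {"node": 1, "attrs": {"node2vec": [0.3, 0.4]}},
]

# Stack per-node vectors into a row-per-node feature matrix.
features = [n["attrs"]["node2vec"] for n in nodes]
print(len(features), len(features[0]))  # 2 2
```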
## Citation

```
@article{Benson-2018-simplicial,
  author    = {Benson, Austin R. and Abebe, Rediet and Schaub, Michael T. and Jadbabaie, Ali and Kleinberg, Jon},
  title     = {Simplicial closure and higher-order link prediction},
  year      = {2018},
  doi       = {10.1073/pnas.1800683115},
  publisher = {National Academy of Sciences},
  issn      = {0027-8424},
  journal   = {Proceedings of the National Academy of Sciences}
}
```