Title: MARIOH: Multiplicity-Aware Hypergraph Reconstruction
URL Source: https://arxiv.org/html/2504.00522
License: CC BY-NC-SA 4.0. arXiv:2504.00522v2 [cs.DB] 06 May 2025

MARIOH: Multiplicity-Aware Hypergraph Reconstruction

Kyuhan Lee1,2, Geon Lee1, and Kijung Shin1
1Kim Jaechul Graduate School of AI, KAIST, Seoul, Republic of Korea; 2GraphAI, Daejeon, Republic of Korea
{kyuhan.lee, geonlee0325, kijungs}@kaist.ac.kr

Abstract
Hypergraphs offer a powerful framework for modeling higher-order interactions that traditional pairwise graphs cannot fully capture. However, practical constraints often lead to their simplification into projected graphs, resulting in substantial information loss and ambiguity in representing higher-order relationships. In this work, we propose MARIOH, a supervised approach for reconstructing the original hypergraph from its projected graph by leveraging edge multiplicity. To overcome the difficulties posed by the large search space, MARIOH integrates several key ideas: (a) identifying provable size-2 hyperedges, which reduces the candidate search space, (b) predicting the likelihood of candidates being hyperedges by utilizing both structural and multiplicity-related features, and (c) not only targeting promising hyperedge candidates but also examining less confident ones to explore alternative possibilities. Together, these ideas enable MARIOH to efficiently and effectively explore the search space. In our experiments using 10 real-world datasets, MARIOH achieves up to 74.51% higher reconstruction accuracy compared to state-of-the-art methods.
I. Introduction
Hypergraphs extend ordinary graphs by allowing edges, known as hyperedges, to connect more than two nodes. This feature makes hypergraphs useful for modeling higher-order interactions involving multiple entities [1, 2]. High-order interactions occur in various systems, including co-authorship networks, where multiple authors collaborate on a paper [3], email networks, where an email is sent to multiple recipients [3], and social networks with group interactions [4, 5].
Figure 1: Reduction of candidate space using edge multiplicity. When edge multiplicities are known (top and middle rows), the number of potential outputs is significantly reduced compared to cases with unknown multiplicity (bottom row). The known edge multiplicities (e.g., multiplicity 1 and 2) constrain the possible hyperedge structures, limiting the search space and enabling more accurate reconstruction. In contrast, the absence of edge multiplicity information leads to an explosion of candidates, including infinitely many possibilities, complicating the reconstruction process.

Figure 2: Example of hypergraph reconstruction on a co-authorship dataset. (a): In the ground-truth hypergraph, each hyperedge represents a set of researchers who co-authored a paper. We focus on the visualized sub-hypergraph $\mathcal{H}$, induced by Jure Leskovec and his randomly chosen ten co-authors. (b): As input, the graph representation $G$ of $\mathcal{H}$ is given, where each edge weight indicates the number of co-authored papers between researchers. (c): SHyRe-Count [6] recovers a subset of ground-truth hyperedges in $\mathcal{H}$ along with some false positives (Jaccard similarity 0.333). (d): In contrast, our proposed method, MARIOH, exactly restores $\mathcal{H}$ (Jaccard and multi-Jaccard similarity 1.000). Refer to Section IV-B for detailed settings, and refer to the online appendix [7] for more case studies on the Host-virus and Crimes datasets.
Although hypergraphs offer advantages in modeling higher-order interactions, practical challenges often lead to their simplification into pairwise graphs. Examples include:
- Biological Complexes: In cellular systems, multiple proteins and biomolecules interact to form complexes or signal transduction assemblies. However, standard assays, such as the yeast two-hybrid system [8] or tandem affinity purification [9, 10], mainly capture only pairwise interactions.
- Social Networks: People interact in groups in meetings and collaborative projects, but available social network data usually record only pairwise interactions [11, 12, 13]. Since group interactions are often brief, context-dependent, and subject to privacy issues, it is hard to observe them directly.
- Brain Connectivity: The brain functions through the coordinated activity of multiple regions at once. However, neuroimaging techniques, such as fMRI, typically provide only pairwise correlation measures between brain regions [14].
As a result, the underlying hypergraph datasets are often unavailable [15, 16], with only their simplified graph representations being released [17, 18].
Hypergraph reconstruction, illustrated in Fig. 2, aims to recover the original hypergraph from its graph-based representation, addressing the loss of information from simplification. Simplifying hypergraphs into graphs results in information loss and ambiguity, as different higher-order interactions can share the same pairwise-edge representations [19]. For example, a size-3 clique in the graph could represent a single hyperedge involving three nodes or three separate pairwise hyperedges, making it challenging to determine the original hyperedge structure. Reconstruction is further complicated by the large search space: any clique in the graph could potentially correspond to a hyperedge, and the number of cliques is typically orders of magnitude greater than the number of nodes or edges. This complexity grows when the same hyperedge may appear multiple times, as we must determine not only which groups of nodes form hyperedges but also their frequency.
Despite its difficulty, hypergraph reconstruction is crucial for many reasons. First, it improves applicability by enabling the use of hypergraph-based tools, which often outperform their graph-only counterparts in various tasks, including clustering [19], node classification [20, 21, 22, 23], link prediction [24], and anomaly detection [25] (see Section IV-D for our experiments on applicability). Second, it offers storage efficiency. While representing a clique of size $k$ in a graph requires storing $\binom{k}{2}$ edges, a hypergraph representation needs only $O(k)$ space for the corresponding hyperedge (see the online appendix [7] for detailed storage savings). Third, it enables a deeper understanding of underlying systems (e.g., brains) by uncovering higher-order relationships (e.g., multi-region interactions) that are lost in graph representations but crucial for interpreting individual or system-level behaviors [26, 27].
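To make the storage argument concrete, the quadratic gap between a projected clique and a single hyperedge can be checked directly (a minimal sketch; the sizes are illustrative, not from the paper):

```python
from math import comb

# A size-k clique in the projected graph requires C(k, 2) edges,
# while the corresponding hyperedge needs only the k node IDs (O(k) space).
for k in (3, 10, 100):
    print(f"k={k}: {comb(k, 2)} projected edges vs. {k} node IDs")
```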
Existing methods for hypergraph reconstruction include a Bayesian approach proposed by Young et al. [13], which seeks to minimize the number of cliques used to cover the projected graph, based on the principle of parsimony. However, this principle does not always hold in practice [6]. Wang and Kleinberg [6] introduced SHyRe, a supervised method that samples cliques from the projected graph and classifies them as either hyperedges or non-hyperedges. While this method leverages machine learning for classification, it relies on sampling, which introduces the issue of false negatives: potential hyperedges that are never sampled are missed entirely.
Importantly, these methods often disregard edge multiplicity in the given graph, even though it can significantly reduce the search space. Edge multiplicity indicates how many times a particular pair of nodes (i.e., an edge in the projected graph) co-occurs across different hyperedges. Notably, this differs from hyperedge multiplicity, which indicates how many times an identical hyperedge (the same set of nodes) is repeated within the hypergraph. As illustrated in Fig. 1, when edge multiplicity information is absent, it is unclear how many times higher-order interactions containing each edge occurred (only a minimum of once is guaranteed). This ambiguity causes an explosion in the search space, especially in duplicated cases, where multiple higher-order interactions share the same set of nodes. For instance, a size-4 clique composed entirely of unknown edges can represent an infinite number of possible higher-order interactions, as the exact count for each edge is unknown. In contrast, when edge multiplicity information is available, it significantly reduces the candidate space for duplicated cases, as shown in Fig. 1. Knowing how many times each edge participates in higher-order interactions limits the possible outputs, enabling more accurate reconstruction.
To the best of our knowledge, the only discussion on leveraging multiplicity can be found in the appendix of [6]. With edge multiplicity in mind, the authors proposed an unsupervised approach where, at each iteration, the highest-ranked maximal clique, determined by specific conditions, is replaced by the corresponding hyperedge. While this method considers edge multiplicity, it suffers from scalability issues on large datasets: since maximal cliques are replaced one by one, the computationally expensive maximal-clique search must be repeated many times. Moreover, in some cases, its reconstruction accuracy is even worse than that of SHyRe, which does not utilize edge multiplicity (as shown in [6]), highlighting its limitations.
Driven by these limitations, in this paper, we propose MARIOH, a supervised method for multiplicity-aware hypergraph reconstruction. MARIOH starts with the projected graph and employs a multiplicity-aware classifier to assign prediction scores to potential hyperedge candidates, including maximal cliques and their subsets. It then greedily replaces the highest-scoring candidates with the corresponding hyperedges.
MARIOH has three key components. First, we reduce the search space by identifying size-2 hyperedge candidates that are theoretically guaranteed to be true hyperedges, minimizing false positives. Second, we use a classifier trained on not only structural but also multiplicity-related features from the projected graph to predict the likelihood of each hyperedge candidate being a true hyperedge, improving selection by accounting for multiplicity. Finally, MARIOH employs a bidirectional search that examines both the most promising and the least promising hyperedge candidates (i.e., cliques). By not only focusing on highly promising maximal cliques but also examining sub-cliques contained in less promising ones, we ensure that important hyperedges are not overlooked.
Through extensive experiments on 10 real-world datasets, we demonstrate the superiority of MARIOH over eight baseline approaches. We summarize its strengths as follows:
- Accuracy: MARIOH recovers real-world hypergraphs up to 74.51% more accurately than baseline methods.
- Transferability: MARIOH trained on a hypergraph can be applied effectively to (the projected graph of) another hypergraph if they are from the same domain (e.g., two co-authorship networks from different fields of study).
- Applicability: Compared to using a projected graph, using the hypergraph reconstructed from it by MARIOH improves performance in downstream tasks, including link prediction and node clustering.
Reproducibility: The code and data used in the paper can be found at https://github.com/KyuhanLee/MARIOH.
II. Preliminaries and Problem Definition

II-A. Preliminaries
Let $\mathcal{V}$ be a set of nodes. A hypergraph is represented as a pair $\mathcal{H} = (\mathcal{V}, \bar{\mathcal{E}}_{\mathcal{H}})$, where $\bar{\mathcal{E}}_{\mathcal{H}}$ is the multiset of hyperedges. Each hyperedge $e \in \bar{\mathcal{E}}_{\mathcal{H}}$ is a subset of $\mathcal{V}$ with at least two nodes, i.e., $e \subseteq \mathcal{V}$ and $|e| \geq 2$. That is, multiple hyperedges comprising the same set of nodes can exist in the multiset $\bar{\mathcal{E}}_{\mathcal{H}}$. We define the multiplicity $m_{\mathcal{H}}(e)$ of a hyperedge $e \in \mathcal{E}_{\mathcal{H}}$ as the number of times $e$ appears in $\bar{\mathcal{E}}_{\mathcal{H}}$, i.e., $m_{\mathcal{H}}(e) = |\{e' \in \bar{\mathcal{E}}_{\mathcal{H}} : e' = e\}|$. Equivalently, the hypergraph can be represented as $\mathcal{H} = (\mathcal{V}, \mathcal{E}_{\mathcal{H}}, m_{\mathcal{H}})$, where $\mathcal{E}_{\mathcal{H}}$ is the set of unique hyperedges in the multiset $\bar{\mathcal{E}}_{\mathcal{H}}$, and $m_{\mathcal{H}} : \mathcal{E}_{\mathcal{H}} \to \mathbb{N}$ is a function that assigns the multiplicity of each hyperedge.
Clique expansion of a hypergraph $\mathcal{H}$ results in its (weighted) projected graph $\mathcal{G} = (\mathcal{V}, \mathcal{E}_{\mathcal{G}}, w)$, where $\mathcal{E}_{\mathcal{G}}$ is the set of node pairs that co-appear in at least one hyperedge in $\bar{\mathcal{E}}_{\mathcal{H}}$:

$$\mathcal{E}_{\mathcal{G}} = \{\{u, v\} : \exists e \in \bar{\mathcal{E}}_{\mathcal{H}} \text{ s.t. } \{u, v\} \subseteq e\}.$$

The weight $w_{u,v}$ of an edge $\{u, v\}$ corresponds to the number of hyperedges in $\mathcal{H}$ that contain both nodes:

$$w_{u,v} = \sum_{e \in \bar{\mathcal{E}}_{\mathcal{H}}} \mathbb{1}(\{u, v\} \subseteq e) = \sum_{e \in \mathcal{E}_{\mathcal{H}}} m_{\mathcal{H}}(e) \cdot \mathbb{1}(\{u, v\} \subseteq e),$$

where $\mathbb{1}(\cdot)$ is the indicator function.

A clique in the projected graph $\mathcal{G} = (\mathcal{V}, \mathcal{E}_{\mathcal{G}}, w)$ is a subset $C \subseteq \mathcal{V}$ such that every pair of distinct nodes in $C$ is connected by an edge, i.e., $\forall u, v \in C$, $u \neq v \implies \{u, v\} \in \mathcal{E}_{\mathcal{G}}$. That is, a subset of nodes $C$ forms a clique if and only if the subgraph of $\mathcal{G}$ induced by $C$ is a complete graph. A maximal clique is a clique that is not a subset of any larger clique in $\mathcal{G}$.
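The clique-expansion definition above can be sketched directly: each hyperedge contributes 1 to the weight of every node pair it contains, so repeats in the multiset raise $w_{u,v}$ (a minimal illustration; the function name and data layout are our own, not from the paper's code):

```python
from collections import Counter
from itertools import combinations

def clique_expansion(hyperedges):
    """Project a multiset of hyperedges onto a weighted pairwise graph.

    `hyperedges` is a list of node sets (repeats encode hyperedge
    multiplicity); the returned Counter maps each sorted node pair
    to its edge multiplicity w_{u,v}.
    """
    weights = Counter()
    for e in hyperedges:
        for u, v in combinations(sorted(e), 2):
            weights[(u, v)] += 1  # one contribution per containing hyperedge
    return weights

# The hyperedges {1,2,3} and {2,3} yield w_{2,3} = 2 and w = 1 elsewhere.
w = clique_expansion([{1, 2, 3}, {2, 3}])
```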
Figure 3: Example procedure of MARIOH. From a projected graph $\mathcal{G}$ (top), MARIOH reconstructs a hypergraph $\hat{\mathcal{H}}$ (bottom). First, MARIOH identifies edges in $\mathcal{G}$ that are theoretically guaranteed to correspond to size-2 hyperedges. These edges are directly incorporated into $\hat{\mathcal{H}}$ and removed from $\mathcal{G}$. Then, MARIOH identifies the maximal cliques in $\mathcal{G}$ and predicts the likelihood of each clique using a classifier trained on multiplicity-aware features. Based on the predicted likelihood, MARIOH employs a bidirectional search that identifies, as hyperedges, (1) cliques with high estimated likelihood (e.g., (A) {5, 6, 7} and (B) {2, 3, 5, 6}) and (2) sub-cliques with high likelihood (e.g., (C') {6, 11}) that are hidden within larger low-likelihood cliques (e.g., (C) {6, 10, 11}). However, clique (B) {2, 3, 5, 6} is not identified as a hyperedge since it no longer exists in the updated projected graph after removing (A) {5, 6, 7}. This is repeated until no edges remain in $\mathcal{G}$.

II-B. Problem Definition
In this work, we generally follow the supervised problem setting in[6] (a detailed comparison between the settings is provided below). As discussed in Sect. I, reconstructing a hypergraph from its projected graph is extremely difficult due to the loss of higher-order relationships. However, if available, a hypergraph from the same domain provides valuable domain-specific information for the problem, since each domain has unique structural patterns and characteristics[28, 29, 30]. Thus, Wang and Kleinberg[6] assume the existence of such a hypergraph, as part of the input, for supervision, leading to the following problem definition:
Problem 1 (Supervised Hypergraph Reconstruction):

- Given: source and target (hyper)graph information:
  - a source projected graph $\mathcal{G}^{(s)} = (\mathcal{V}^{(s)}, \mathcal{E}_{\mathcal{G}}^{(s)}, w^{(s)})$,
  - a corresponding source hypergraph $\mathcal{H}^{(s)} = (\mathcal{V}^{(s)}, \bar{\mathcal{E}}_{\mathcal{H}}^{(s)})$,
  - a target projected graph $\mathcal{G}^{(t)} = (\mathcal{V}^{(t)}, \mathcal{E}_{\mathcal{G}}^{(t)}, w^{(t)})$,
- Find: a reconstructed hypergraph $\hat{\mathcal{H}}^{(t)} = (\mathcal{V}^{(t)}, \hat{\bar{\mathcal{E}}}_{\mathcal{H}}^{(t)})$ corresponding to the target projected graph $\mathcal{G}^{(t)}$,
- to Maximize: the reconstruction accuracy.
Note that the source and target (hyper)graphs are from the same domain. Learning to reconstruct the source projected graph $\mathcal{G}^{(s)}$ into the original source hypergraph $\mathcal{H}^{(s)}$ provides valuable guidance for accurately reconstructing the target projected graph $\mathcal{G}^{(t)}$ into the target hypergraph $\mathcal{H}^{(t)}$.
Reconstruction Accuracy: To numerically evaluate the similarity between the reconstructed hypergraph $\hat{\mathcal{H}}^{(t)}$ and the original target hypergraph $\mathcal{H}^{(t)}$, we measure the Jaccard similarity, following previous studies on hypergraph reconstruction [13, 6]. The Jaccard similarity $J(\mathcal{H}^{(t)}, \hat{\mathcal{H}}^{(t)})$ between two hypergraphs $\mathcal{H}^{(t)}$ and $\hat{\mathcal{H}}^{(t)}$ is defined as:

$$J(\mathcal{H}^{(t)}, \hat{\mathcal{H}}^{(t)}) = \frac{|\mathcal{E}_{\mathcal{H}}^{(t)} \cap \hat{\mathcal{E}}_{\mathcal{H}}^{(t)}|}{|\mathcal{E}_{\mathcal{H}}^{(t)} \cup \hat{\mathcal{E}}_{\mathcal{H}}^{(t)}|},$$

which evaluates how accurately the unique hyperedges are reconstructed in $\hat{\mathcal{H}}^{(t)}$.

In addition, to account for the multiplicity of the hyperedges, we use the multi-Jaccard similarity [31], which extends the Jaccard similarity by incorporating the multiplicities of hyperedges. The multi-Jaccard similarity $J_{\text{multi}}(\mathcal{H}^{(t)}, \hat{\mathcal{H}}^{(t)})$ between $\mathcal{H}^{(t)}$ and $\hat{\mathcal{H}}^{(t)}$ is defined as:

$$J_{\text{multi}}(\mathcal{H}^{(t)}, \hat{\mathcal{H}}^{(t)}) = \frac{\sum_{e \in \mathcal{E}_{\text{union}}} \min(m_{\mathcal{H}^{(t)}}(e), m_{\hat{\mathcal{H}}^{(t)}}(e))}{\sum_{e \in \mathcal{E}_{\text{union}}} \max(m_{\mathcal{H}^{(t)}}(e), m_{\hat{\mathcal{H}}^{(t)}}(e))},$$

where $\mathcal{E}_{\text{union}} = \mathcal{E}_{\mathcal{H}}^{(t)} \cup \hat{\mathcal{E}}_{\mathcal{H}}^{(t)}$. This measure provides a more comprehensive evaluation by considering both the presence and frequency of hyperedges.
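The two similarity measures can be computed directly from the definitions above (a minimal sketch; representing hyperedges as frozensets and multiplicities as Counters is our own choice, not the paper's):

```python
from collections import Counter

def jaccard(E_true, E_rec):
    """Jaccard similarity over the sets of unique hyperedges."""
    return len(E_true & E_rec) / len(E_true | E_rec)

def multi_jaccard(m_true, m_rec):
    """Multi-Jaccard similarity; inputs map each hyperedge to its multiplicity."""
    union = set(m_true) | set(m_rec)
    num = sum(min(m_true[e], m_rec[e]) for e in union)
    den = sum(max(m_true[e], m_rec[e]) for e in union)
    return num / den

# Ground truth contains {1,2,3} twice; the reconstruction recovers it once.
H_true = Counter({frozenset({1, 2, 3}): 2, frozenset({2, 3}): 1})
H_rec = Counter({frozenset({1, 2, 3}): 1, frozenset({2, 3}): 1})
j = jaccard(set(H_true), set(H_rec))  # unique hyperedges match exactly
jm = multi_jaccard(H_true, H_rec)     # the missed duplicate lowers the score
```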
Comparison with Existing Problems: While there have been efforts to reconstruct hypergraphs from their projected pairwise graphs, most existing studies have overlooked the importance of edge multiplicity and have limited their input data to unweighted projected graphs. This approach essentially considers only the existence of pairwise edges, disregarding how frequently the constituent nodes co-appear within hyperedges in the original hypergraph.
To the best of our knowledge, hypergraph reconstruction with edge multiplicity has only been explored by Wang and Kleinberg [6]. Specifically, while their main text focuses on supervised hypergraph reconstruction without considering edge multiplicity, their appendix examines an unsupervised approach that incorporates edge multiplicity. However, supervised hypergraph reconstruction with edge multiplicity, which we address in this work, has not been explored.
III. Proposed Method
We propose MARIOH, an effective method for supervised multiplicity-aware hypergraph reconstruction (i.e., Problem 1). MARIOH explores cliques in the projected graph as potential hyperedge candidates. It employs a classifier trained on multiplicity-aware features to predict the likelihood of each clique being a hyperedge. For cliques likely to form hyperedges, MARIOH updates the projected graph by removing their constituent edges and incorporates the corresponding sets of nodes as hyperedges in the reconstructed hypergraph.
The core idea of MARIOH is to effectively leverage edge multiplicity information in the projected graph. Specifically, MARIOH benefits from edge multiplicity in two key ways:
- Reduced search space: Edge multiplicity significantly reduces the number of potential cases of hyperedge overlaps among the nodes in a given clique. This reduction simplifies the hyperedge identification problem and enhances the efficiency of the overall reconstruction process, as discussed in Sect. I. MARIOH not only utilizes edge multiplicity throughout the reconstruction process but also further reduces the search space by leveraging edge multiplicity to identify size-2 hyperedges that are theoretically guaranteed to be true hyperedges in a preprocessing step.
- Clique feature enhancement: Edge multiplicity represents the number of hyperedges in the original hypergraph that overlap across pairs of nodes. When aggregated across all edges in a clique, this information provides a valuable feature for distinguishing whether the clique corresponds to a true hyperedge in the original hypergraph. MARIOH leverages various features derived from edge multiplicity in a simple classifier trained to predict whether a given clique represents a hyperedge (see Sect. III-D).
Moreover, MARIOH incorporates a bidirectional search that explores cliques with both high and low predicted likelihoods of being hyperedges. While it identifies highly probable cliques as hyperedges, it also examines subsets of low-likelihood cliques that might otherwise be overlooked. This comprehensive approach improves the accuracy of hypergraph reconstruction, as empirically demonstrated in Sect. IV.
Algorithm 1: Overview of MARIOH

Input: (a) input projected graph $\mathcal{G} = (\mathcal{V}, \mathcal{E}_{\mathcal{G}}, w)$; (b) multiplicity-aware classifier $\mathcal{M}$; (c) initial classification threshold $\theta_{\text{init}}$; (d) negative prediction processing ratio $\rho$ (%); (e) threshold adjustment ratio $\alpha$
Output: (a) reconstructed hypergraph $\hat{\mathcal{H}}$

1. $\hat{\mathcal{H}} \leftarrow \emptyset$  ▷ initialize the reconstructed hypergraph
2. $\theta \leftarrow \theta_{\text{init}}$  ▷ $\theta$ is the classification threshold for $\mathcal{M}$
3. $\mathcal{G}', \hat{\mathcal{H}} \leftarrow \text{Filtering}(\mathcal{G}, \hat{\mathcal{H}})$
4. while $\mathcal{G}'$ has edges do
5. ⎸ $\mathcal{G}', \hat{\mathcal{H}} \leftarrow \text{BidirectionalSearch}(\mathcal{G}', \mathcal{M}, \theta, \rho, \hat{\mathcal{H}})$
6. ⎸ $\theta \leftarrow \max(\theta - \alpha \times \theta_{\text{init}}, 0)$
7. return $\hat{\mathcal{H}}$

III-A. Overview of MARIOH (Algorithm 1)
MARIOH generates a reconstructed hypergraph $\hat{\mathcal{H}}$, initialized as an empty set, from a given projected graph $\mathcal{G}$. The reconstruction process comprises two main procedures:
1. Theoretically-Guaranteed Filtering (Sect. III-B): As a preprocessing step, MARIOH identifies edges in $\mathcal{G}$ that are theoretically guaranteed to correspond to size-2 hyperedges. These edges are directly incorporated into the reconstructed hypergraph $\hat{\mathcal{H}}$ and removed from $\mathcal{G}$, reducing it to an intermediate graph $\mathcal{G}'$.
2. Bidirectional Search (Sect. III-C): MARIOH examines the maximal cliques in $\mathcal{G}'$ as potential hyperedges. Each maximal clique is assessed for its likelihood of being a hyperedge using a classifier trained on multiplicity-aware features. Based on the predicted likelihood with respect to an adjustable threshold, MARIOH identifies as hyperedges not only (1) cliques with high estimated likelihood but also (2) sub-cliques with high likelihood that are hidden within larger low-likelihood cliques, improving the comprehensiveness of the search. This process is iteratively repeated until no edges remain in $\mathcal{G}'$.
Finally, MARIOH completes the process and returns the reconstructed hypergraph $\hat{\mathcal{H}}$. Below, we provide a detailed description of each step.
III-B. Theoretically-Guaranteed Filtering (Algorithm 2)
As a preprocessing step, MARIOH identifies the edges in $\mathcal{G}$ that are theoretically guaranteed to correspond to true size-2 hyperedges of the original hypergraph $\mathcal{H}$. These edges are directly incorporated into the reconstructed hypergraph $\hat{\mathcal{H}}$ and removed from $\mathcal{G}$, resulting in an intermediate graph $\mathcal{G}'$. This step effectively reduces the search space for subsequent reconstruction steps and allows MARIOH to focus on identifying higher-order (i.e., larger) hyperedges.
Algorithm 2: Filtering

Input: (a) input projected graph $\mathcal{G} = (\mathcal{V}, \mathcal{E}_{\mathcal{G}}, w)$; (b) reconstructed hypergraph $\hat{\mathcal{H}}$
Output: (a) intermediate graph $\mathcal{G}' = (\mathcal{V}, \mathcal{E}_{\mathcal{G}'}, w')$; (b) updated reconstructed hypergraph $\hat{\mathcal{H}}$

1. $\mathcal{E}_{\mathcal{G}'} \leftarrow \mathcal{E}_{\mathcal{G}}$ and $w' \leftarrow w$  ▷ initialize the intermediate graph
2. for each edge $(u, v) \in \mathcal{E}_{\mathcal{G}}$ do
3. ⎸ $\text{MHH}(u, v) \leftarrow \sum_{z \in N(u) \cap N(v)} \min(w_{u,z}, w_{v,z})$
4. ⎸ $r_{u,v} \leftarrow w_{u,v} - \text{MHH}(u, v)$
5. ⎸ if $r_{u,v} > 0$ then
6. ⎸ ⎸ $\hat{\mathcal{H}} \leftarrow \hat{\mathcal{H}} \cup \{\{u, v\}\}^{r_{u,v}}$  ▷ add $\{u, v\}$ with multiplicity $r_{u,v}$
7. ⎸ ⎸ $w'_{u,v} \leftarrow w_{u,v} - r_{u,v}$
8. ⎸ ⎸ if $w'_{u,v} = 0$ then $\mathcal{E}_{\mathcal{G}'} \leftarrow \mathcal{E}_{\mathcal{G}'} \setminus \{(u, v)\}$
9. return $\mathcal{G}', \hat{\mathcal{H}}$
For each edge $(u, v) \in \mathcal{E}_{\mathcal{G}}$ in $\mathcal{G}$, we define the maximum number of higher-order hyperedges (MHH) as the maximum possible number of higher-order hyperedges (i.e., hyperedges of size 3 or larger) that involve both $u$ and $v$. Formally, the MHH of nodes $u$ and $v$ is defined as:

$$\text{MHH}(u, v) = \sum_{z \in N(u) \cap N(v)} \min(w_{u,z}, w_{v,z}), \quad (1)$$

where $N(u)$ and $N(v)$ represent the sets of neighbors of $u$ and $v$, respectively. From the definition of MHH, we derive Lemma 1 and Lemma 2, which provide theoretical guarantees for identifying size-2 hyperedges.
Lemma 1 (Upper Bound on Higher-Order Hyperedges). $\text{MHH}(u, v)$ is an upper bound on the total number of higher-order hyperedges that include both $u$ and $v$.

From Lemma 1, we define the residual edge multiplicity $r_{u,v}$ between nodes $u$ and $v$ as follows:

$$r_{u,v} = w_{u,v} - \text{MHH}(u, v),$$

which directly leads to Lemma 2.

Lemma 2 (Lower Bound on Size-2 Hyperedges). The residual edge multiplicity $r_{u,v}$ is a lower bound on the number of true hyperedges in $\mathcal{H}$ that consist exclusively of $u$ and $v$.

Therefore, if $r_{u,v} > 0$, we can theoretically and confidently identify $\{u, v\}$ as a true size-2 hyperedge. Such edges are removed from $\mathcal{G}$ and incorporated into the reconstructed hypergraph $\hat{\mathcal{H}}$. The proofs of Lemma 1 and Lemma 2 are provided in the online appendix [7].
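Putting Eq. (1) and the residual multiplicity together, the filtering step can be sketched as follows (an illustrative implementation assuming edges are stored as sorted pairs; it is not the authors' code):

```python
from collections import defaultdict

def filter_size2(weights):
    """Sketch of the filtering step: `weights` maps pairs (u, v) with u < v
    to edge multiplicities. Returns the guaranteed size-2 hyperedges with
    their counts r_{u,v}, and the residual weights of the intermediate graph."""
    nbrs = defaultdict(dict)
    for (u, v), w in weights.items():
        nbrs[u][v] = w
        nbrs[v][u] = w

    size2, residual = {}, dict(weights)
    for (u, v), w in weights.items():
        # MHH(u, v): upper bound on hyperedges of size >= 3 containing u and v.
        common = set(nbrs[u]) & set(nbrs[v])
        mhh = sum(min(nbrs[u][z], nbrs[v][z]) for z in common)
        r = w - mhh  # residual multiplicity: lower bound on size-2 hyperedges
        if r > 0:
            size2[(u, v)] = r
            residual[(u, v)] = w - r
            if residual[(u, v)] == 0:
                del residual[(u, v)]  # edge fully consumed; drop it from G'
    return size2, residual

# An isolated pair with multiplicity 2 must be a size-2 hyperedge, twice:
# it has no common neighbors, so MHH = 0 and r = 2.
s2, g_prime = filter_size2({(1, 2): 2})
```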
Algorithm 3: BidirectionalSearch

Input: (a) intermediate graph $\mathcal{G}' = (\mathcal{V}, \mathcal{E}_{\mathcal{G}'}, w')$; (b) multiplicity-aware classifier $\mathcal{M}$; (c) classification threshold $\theta$; (d) reconstructed hypergraph $\hat{\mathcal{H}}$; (e) negative prediction processing ratio $\rho$ (%)
Output: (a) updated intermediate graph $\mathcal{G}' = (\mathcal{V}, \mathcal{E}_{\mathcal{G}'}, w')$; (b) updated reconstructed hypergraph $\hat{\mathcal{H}}$

1. $\mathcal{Q} \leftarrow$ all maximal cliques (sets of nodes) in $\mathcal{G}'$
2. $\mathcal{Q}_{\text{pos}} \leftarrow \{Q \in \mathcal{Q} : \mathcal{M}(Q) > \theta\}$
3. $\mathcal{Q}_{\text{neg}} \leftarrow \{Q \in \mathcal{Q} \setminus \mathcal{Q}_{\text{pos}} : \text{lowest } \rho\% \text{ w.r.t. } \mathcal{M}(Q)\}$
▷ Phase 1: most promising cliques
4. sort $\mathcal{Q}_{\text{pos}}$ in descending order of $\mathcal{M}(Q)$
5. for each clique $Q \in \mathcal{Q}_{\text{pos}}$ do
6. ⎸ $\mathcal{E}_Q \leftarrow \{(u, v) : u, v \in Q, u \neq v\}$
7. ⎸ if $\mathcal{E}_Q \subseteq \mathcal{E}_{\mathcal{G}'}$ then
8. ⎸ ⎸ $\hat{\mathcal{H}} \leftarrow \hat{\mathcal{H}} \cup \{Q\}$
9. ⎸ ⎸ $w'_{u,v} \leftarrow w'_{u,v} - 1$ for all $(u, v) \in \mathcal{E}_Q$
10. ⎸ ⎸ $\mathcal{E}_{\mathcal{G}'} \leftarrow \mathcal{E}_{\mathcal{G}'} \setminus \{(u, v) \in \mathcal{E}_Q : w'_{u,v} = 0\}$
▷ Phase 2: least promising cliques
11. $\bar{\mathcal{Q}}_{\text{sub}} \leftarrow \bigcup_{Q \in \mathcal{Q}_{\text{neg}}} \bigcup_{k=2}^{|Q|-1} \text{RandomSample}\left(\binom{Q}{k}\right)$
12. $\mathcal{Q}_{\text{sub}} \leftarrow \{Q \in \bar{\mathcal{Q}}_{\text{sub}} : \mathcal{M}(Q) > \theta\}$
13. sort $\mathcal{Q}_{\text{sub}}$ in descending order of $\mathcal{M}(Q)$
14. for each subset $Q_{\text{sub}} \in \mathcal{Q}_{\text{sub}}$ do
15. ⎸ $\mathcal{E}_{Q_{\text{sub}}} \leftarrow \{(u, v) : u, v \in Q_{\text{sub}}, u \neq v\}$
16. ⎸ if $\mathcal{E}_{Q_{\text{sub}}} \subseteq \mathcal{E}_{\mathcal{G}'}$ then
17. ⎸ ⎸ $\hat{\mathcal{H}} \leftarrow \hat{\mathcal{H}} \cup \{Q_{\text{sub}}\}$
18. ⎸ ⎸ $w'_{u,v} \leftarrow w'_{u,v} - 1$ for all $(u, v) \in \mathcal{E}_{Q_{\text{sub}}}$
19. ⎸ ⎸ $\mathcal{E}_{\mathcal{G}'} \leftarrow \mathcal{E}_{\mathcal{G}'} \setminus \{(u, v) \in \mathcal{E}_{Q_{\text{sub}}} : w'_{u,v} = 0\}$
20. return $\mathcal{G}', \hat{\mathcal{H}}$

III-C. Bidirectional Search (Algorithm 3)
MARIOH examines the cliques in the intermediate graph $\mathcal{G}'$ to determine whether they qualify as hyperedges. It employs a bidirectional search, which not only evaluates cliques with high predicted likelihoods as hyperedges but also investigates sub-cliques within low-likelihood cliques to uncover potential hyperedges that might otherwise be overlooked.
Specifically, MARIOH identifies all maximal cliques $\mathcal{Q}$ in $\mathcal{G}'$, and for each maximal clique $Q \in \mathcal{Q}$, it uses a classifier $\mathcal{M}$, trained on multiplicity-aware features (as detailed in Sect. III-D), to predict the score $\mathcal{M}(Q)$ of the clique being a hyperedge. Based on the predicted scores relative to the threshold $\theta$, which is adaptively adjusted throughout the process, the maximal cliques are categorized into two subsets: (1) the set of most promising cliques $\mathcal{Q}_{\text{pos}}$, consisting of cliques with prediction scores above the threshold $\theta$ (i.e., $\mathcal{M}(Q) > \theta$, $\forall Q \in \mathcal{Q}_{\text{pos}}$); and (2) the set of least promising cliques $\mathcal{Q}_{\text{neg}}$, consisting of the cliques with the lowest $\rho\%$ of prediction scores among $\mathcal{Q} \setminus \mathcal{Q}_{\text{pos}}$. The bidirectional search in Algorithm 3 applies distinct approaches to each subset of cliques, as discussed below.
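The categorization into the most and least promising clique sets can be sketched as follows (an illustrative snippet; `score` stands in for the trained classifier, and the tie-handling details are our own simplification):

```python
def partition_cliques(cliques, score, theta, rho):
    """Split maximal cliques into the most promising set (score above theta)
    and the least promising set (the lowest rho percent of the rest by score)."""
    q_pos = [q for q in cliques if score(q) > theta]
    rest = sorted((q for q in cliques if score(q) <= theta), key=score)
    k = int(len(rest) * rho / 100)  # keep only the lowest-scoring rho percent
    return q_pos, rest[:k]

# Toy scores for three candidate cliques.
scores = {frozenset({1, 2}): 0.9, frozenset({2, 3}): 0.4, frozenset({3, 4}): 0.1}
q_pos, q_neg = partition_cliques(list(scores), scores.get, theta=0.5, rho=50)
```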
Most Promising Cliques: The cliques with high prediction scores are sequentially added to the reconstructed hypergraph $\hat{\mathcal{H}}$ as hyperedges. Simultaneously, their constituent edges are removed from $\mathcal{G}'$. It is important to note that if a prioritized clique (e.g., {5, 6, 7} in Fig. 3) is removed from $\mathcal{G}'$, a subsequent clique (e.g., {2, 3, 5, 6} in Fig. 3) may no longer exist in the updated intermediate graph $\mathcal{G}'$ because its overlapping edges with the former clique have already been removed.
Least Promising Cliques: For cliques with low prediction scores, which are predicted as unlikely to be hyperedges, it is still possible that they contain sub-cliques that are true hyperedges in the original hypergraph $\mathcal{H}$. To identify such sub-cliques, for each clique $Q \in \mathcal{Q}_{\text{neg}}$ and for each sub-clique size $k \in \{2, \dots, |Q| - 1\}$, MARIOH randomly samples sub-cliques from the set of all possible sub-cliques (i.e., $\binom{Q}{k}$). The resulting set of sampled sub-cliques, denoted as $\bar{\mathcal{Q}}_{\text{sub}}$, is formally defined as:

$$\bar{\mathcal{Q}}_{\text{sub}} = \bigcup_{Q \in \mathcal{Q}_{\text{neg}}} \bigcup_{k=2}^{|Q|-1} \text{RandomSample}\left(\binom{Q}{k}\right),$$

where $\text{RandomSample}(\cdot)$ randomly samples an element from the input set. From the set $\bar{\mathcal{Q}}_{\text{sub}}$, which contains $\sum_{Q \in \mathcal{Q}_{\text{neg}}} (|Q| - 2)$ sub-cliques, those with high prediction scores are filtered to produce $\mathcal{Q}_{\text{sub}}$, defined as:

$$\mathcal{Q}_{\text{sub}} = \{Q \in \bar{\mathcal{Q}}_{\text{sub}} : \mathcal{M}(Q) > \theta\}.$$
The sub-cliques in $\mathcal{Q}_{\text{sub}}$, which are estimated to be true hyperedges, are added to the reconstructed hypergraph $\hat{\mathcal{H}}$. Simultaneously, the intermediate graph $\mathcal{G}'$ is updated by removing the constituent edges of these sub-cliques. In addition, identifying sub-cliques within the least promising cliques ensures that these cliques do not reappear in subsequent iterations and thus enhances the efficiency of MARIOH.
Adaptive Threshold Adjustment: The threshold $\theta$, which is used to estimate the likelihood of a clique $Q$ being a hyperedge based on the predicted score $\mathcal{M}(Q)$, is adaptively decreased over iterations using a threshold adjustment ratio $\alpha$. Specifically, after each iteration, $\theta$ is updated as follows:

$$\theta \leftarrow \max(\theta - \alpha \times \theta_{\text{init}}, 0),$$

where $\theta_{\text{init}}$ is the initial value of the threshold. This adaptive adjustment allows more candidates to be considered as potential hyperedges in subsequent iterations. By gradually lowering the threshold, the recall of the reconstructed hypergraph is enhanced, as it captures cliques that may have lower prediction scores but are still likely to be hyperedges.
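The outer loop, including this threshold schedule, can be sketched as follows (a skeleton only; `search` abstracts one bidirectional-search pass, and the toy pass below is purely illustrative):

```python
def reconstruct(weights, search, theta_init=0.9, alpha=0.1):
    """Outer reconstruction loop with the adaptive threshold schedule:
    theta <- max(theta - alpha * theta_init, 0) after each iteration."""
    recon, theta = [], theta_init
    while weights:  # iterate until no edges remain in the intermediate graph
        found, weights = search(weights, theta)
        recon.extend(found)
        theta = max(theta - alpha * theta_init, 0.0)
    return recon

# Toy stand-in for one bidirectional-search pass: accept every remaining
# edge as a size-2 hyperedge and consume the whole graph.
def toy_search(weights, theta):
    return [frozenset(e) for e in weights], {}

recovered = reconstruct({(1, 2): 1, (2, 3): 1}, toy_search)
```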
III-D. Multiplicity-Aware Classifier
Lastly, we discuss how the classifier $\mathcal{M}$, which generates the prediction score $\mathcal{M}(Q)$ for a given clique $Q$, is trained. Specifically, $\mathcal{M}$ is trained to determine whether a clique $Q$ in the source projected graph $\mathcal{G}^{(s)}$ corresponds to a hyperedge in the source hypergraph $\mathcal{H}^{(s)}$. The classifier leverages a carefully designed feature representation for each clique, which comprehensively captures edge multiplicity information.
Multiplicity-Aware Clique Features: For a clique $Q$ identified in the projected graph, we generate its feature representation. Specifically, we integrate three levels of features: node-level, edge-level, and clique-level features, all of which account for edge multiplicity.
- Node-level features: For each node $u \in Q$ in the clique $Q$, we use its weighted degree (i.e., $\sum_{v \in N(u)} w_{u,v}$).
- Edge-level features: For each edge $(u, v) \in \mathcal{E}_Q$ within the clique $Q$, we include (1) its multiplicity (i.e., $w_{u,v}$), (2) the maximum number of higher-order hyperedges (i.e., $\text{MHH}(u, v)$; see Eq. (1)), and (3) the maximum portion of higher-order hyperedges (i.e., $\text{MHH}(u, v) / w_{u,v}$), which quantifies the maximum proportion of higher-order hyperedges that contribute to the edge multiplicity $w_{u,v}$.
- Clique-level features: For the clique $Q$, we include (1) its clique size (i.e., $|Q|$), (2) the clique cut ratio, which measures the proportion of edge multiplicity within the clique relative to the total edge multiplicity connected to nodes in the clique, and (3) a binary indicator specifying whether the clique is maximal in the projected graph $\mathcal{G}$ (i.e., 1 if $Q$ is maximal in $\mathcal{G}$, and 0 otherwise).
To summarize node-level and edge-level features into clique-level features, we aggregate the respective node or edge features for each clique. Specifically, for each set of node or edge features, we calculate the sum, average, minimum, maximum, and standard deviation, resulting in a 5-dimensional vector. These aggregated vectors are then concatenated with the predefined clique-level features to form the final feature representation of the clique. In Sect. IV-E, we empirically demonstrate that these features derived from edge multiplicity are more effective and informative than other potentially feasible clique feature representations.
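The aggregation scheme can be illustrated as follows (a simplified sketch covering only weighted degrees and edge multiplicities; the MHH-based and cut-ratio features from the lists above are omitted, and the exact feature order is our own choice):

```python
import statistics
from itertools import combinations

def aggregate(values):
    """Summarize a feature list into the (sum, mean, min, max, std) vector."""
    std = statistics.pstdev(values) if len(values) > 1 else 0.0
    return [sum(values), statistics.mean(values), min(values), max(values), std]

def clique_features(clique, weights, is_maximal):
    """Feature vector for a clique: aggregated node-level weighted degrees,
    aggregated edge-level multiplicities, then clique-level features.
    `weights` maps sorted node pairs to edge multiplicities; in a real
    setting it would also cover edges outside the clique."""
    edges = [tuple(sorted(p)) for p in combinations(sorted(clique), 2)]
    multiplicities = [weights[e] for e in edges]
    degrees = [sum(w for e, w in weights.items() if u in e) for u in clique]
    return (aggregate(degrees)            # node-level summary (5 dims)
            + aggregate(multiplicities)   # edge-level summary (5 dims)
            + [len(clique), float(is_maximal)])  # clique-level features

feats = clique_features({1, 2, 3}, {(1, 2): 1, (1, 3): 1, (2, 3): 2}, True)
```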
The classifier M is implemented as a simple MLP. For further details about M, including its negative sampling strategy, refer to the online appendix [7].
III-E Complexity Analysis
We analyze the time and space complexity of MARIOH. All proofs can be found in the online appendix [7].
Time Complexity: We first analyze the time complexity of MARIOH when applied to the graph G = (V, E_G, w). Specifically, we analyze the complexities of the Filtering step (Algorithm 2) and the BidirectionalSearch step (Algorithm 3). For the complexity of the negative sampling step for training the classifier M, refer to [7]. The empirical runtimes of MARIOH are presented in Sect. IV-F.
Lemma 3 (Time Complexity of Algorithm 2).
The time complexity of the Filtering step is O(|E_G| · d_max), where d_max = max_{u ∈ V} |N(u)| is the maximum degree of G.
Assuming that d_max is a constant, the Filtering step exhibits a practical time complexity of O(|E_G|).
Lemma 4 (Time Complexity of Algorithm 3).
The time complexity of the BidirectionalSearch step is O(T · log |S|), where T = ∑_{C ∈ S} w_C. Here, S is the set of all cliques in G, and w_C = min_{(u,v) ∈ C} w_{u,v} is the minimum edge multiplicity among the edges in clique C.
Space Complexity: MARIOH requires O(|V| + |E_G|) space for the input projected graph, plus extra space proportional to the sum of the sizes (i.e., node counts) of the cliques processed in each iteration of the BidirectionalSearch step, which form a subset of all cliques in G.
IV Experiments
We perform experiments to answer the following questions:
Q1. Accuracy: How accurately does MARIOH reconstruct hypergraphs from projected graphs? How effectively do the reconstructed hypergraphs preserve the structural properties of the original hypergraph?
Q2. Transferability: Can MARIOH, trained on one dataset, reconstruct other hypergraphs from the same data domain?
Q3. Applicability: How much does hypergraph reconstruction by MARIOH impact downstream task performance?
Q4. Effectiveness: Is every component of MARIOH effective? Which clique features are most important? Is MARIOH robust to changes in its hyperparameters?
Q5. Scalability: How fast is MARIOH compared to its competitors? How does its running time scale with the graph size?
IV-A Experimental Settings

TABLE I: Summary of the datasets. We report the number of nodes (|V|), the number of hyperedges (|E_H|), and the average hyperedge multiplicity (Avg. m_H) in the original hypergraph H, as well as the number of edges (|E_G|) and the average edge multiplicity (Avg. w) in the corresponding projected graph G.

| Dataset | \|V\| | \|E_H\| | Avg. m_H | \|E_G\| | Avg. w |
|---|---|---|---|---|---|
| Enron [3] | 141 | 889 | 5.85 | 5,205 | 9.18 |
| P. School [3] | 238 | 7,975 | 6.90 | 55,043 | 11.98 |
| H. School [3] | 318 | 4,254 | 17.01 | 72,369 | 22.24 |
| Crime [13] | 308 | 105 | 1.01 | 106 | 1.03 |
| Hosts [13] | 449 | 159 | 1.06 | 168 | 1.24 |
| Directors [13] | 513 | 101 | 1.01 | 102 | 1.02 |
| Foursquare [13] | 2,254 | 873 | 1.00 | 873 | 1.02 |
| DBLP [3] | 389,330 | 213,328 | 1.10 | 235,498 | 1.28 |
| Eu [3] | 891 | 6,805 | 1.26 | 8,581 | 4.62 |
| MAG-TopCS [32] | 48,742 | 25,945 | 1.00 | 25,945 | 1.14 |

TABLE II: Reconstruction accuracy in the multiplicity-reduced setting. MARIOH demonstrates the highest Jaccard similarity (scaled by a factor of 100) compared to the baseline methods. "OOT" indicates "out of time" (exceeding 24 hours), and "OOM" represents "out of memory" (system limit: 384 GB).

| Method | Enron | P.School | H.School | Crime | Hosts | Directors | Foursquare | DBLP | Eu | MAG-TopCS |
|---|---|---|---|---|---|---|---|---|---|---|
| CFinder | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | 24.04 ± 0.00 | 2.50 ± 0.00 | 41.18 ± 0.00 | 7.81 ± 0.00 | 21.50 ± 0.00 | 0.01 ± 0.00 | 25.34 ± 0.00 |
| Demon | 2.43 ± 0.00 | 0.09 ± 0.00 | 2.97 ± 0.00 | 76.27 ± 0.48 | 9.21 ± 0.62 | 90.14 ± 1.39 | 17.01 ± 0.35 | 49.05 ± 0.02 | 0.01 ± 0.00 | 24.75 ± 0.05 |
| Max Clique | 4.31 ± 0.00 | 0.09 ± 0.00 | 2.38 ± 0.00 | 92.82 ± 0.00 | 23.19 ± 0.00 | 100.00 ± 0.00 | 9.55 ± 0.00 | 84.51 ± 0.00 | 0.98 ± 0.00 | 82.12 ± 0.00 |
| Clique Covering | 6.84 ± 0.00 | 1.95 ± 0.00 | 6.89 ± 0.00 | 93.24 ± 0.00 | 54.19 ± 0.00 | 100.00 ± 0.00 | 93.69 ± 0.00 | 83.87 ± 0.00 | 7.11 ± 0.00 | 84.27 ± 0.00 |
| Bayesian-MDL | 4.77 ± 0.26 | 0.18 ± 0.02 | 3.57 ± 0.05 | 93.15 ± 0.20 | 53.10 ± 0.32 | 100.00 ± 0.00 | 80.65 ± 1.06 | 86.06 ± 0.01 | 4.76 ± 0.03 | 87.26 ± 0.02 |
| SHyRe-Unsup | 13.74 ± 0.00 | 8.34 ± 0.00 | 17.00 ± 0.00 | 94.86 ± 0.00 | 50.38 ± 0.00 | 100.00 ± 0.00 | 92.27 ± 0.00 | OOT | 5.19 ± 0.00 | 94.31 ± 0.00 |
| SHyRe-Motif | 14.14 ± 3.27 | OOT | 54.21 ± 0.31 | 92.82 ± 0.00 | 49.69 ± 0.59 | 100.00 ± 0.00 | 70.92 ± 6.65 | 85.99 ± 0.04 | OOM | 86.04 ± 0.11 |
| SHyRe-Count | 14.36 ± 0.50 | 42.87 ± 1.66 | 54.19 ± 0.22 | 92.82 ± 0.00 | 56.64 ± 0.55 | 100.00 ± 0.00 | 85.96 ± 0.06 | 86.07 ± 0.00 | 11.14 ± 0.27 | 87.14 ± 0.09 |
| MARIOH-M | 19.48 ± 0.45 | 47.04 ± 0.56 | 55.69 ± 0.18 | 93.65 ± 0.30 | 56.17 ± 2.64 | 100.00 ± 0.00 | 94.03 ± 3.04 | 95.86 ± 0.44 | 11.85 ± 0.05 | 96.57 ± 9.24 |
| MARIOH-F | 22.95 ± 0.92 | 47.26 ± 0.64 | 57.66 ± 0.40 | 96.33 ± 7.35 | 57.47 ± 3.46 | 100.00 ± 0.00 | 97.27 ± 1.80 | 97.82 ± 0.11 | 12.48 ± 0.41 | 97.01 ± 0.12 |
| MARIOH-B | 21.23 ± 1.79 | 9.59 ± 0.38 | 18.04 ± 0.32 | 100.00 ± 0.00 | 60.08 ± 5.34 | 100.00 ± 0.00 | 100.00 ± 0.00 | 98.25 ± 0.02 | 11.87 ± 0.13 | 98.20 ± 0.09 |
| MARIOH | 25.06 ± 0.60 | 47.48 ± 0.65 | 57.75 ± 0.29 | 100.00 ± 0.00 | 61.52 ± 2.63 | 100.00 ± 0.00 | 99.19 ± 0.34 | 98.13 ± 0.06 | 13.98 ± 0.23 | 98.04 ± 0.12 |

TABLE III: Reconstruction accuracy in the multiplicity-preserved setting. MARIOH demonstrates the highest multi-Jaccard similarity (scaled by a factor of 100) compared to the baseline methods. "OOT" indicates "out of time" (exceeding 24 hours).

| Method | Enron | P.School | H.School | Crime | Hosts | Directors | Foursquare | DBLP | Eu | MAG-TopCS |
|---|---|---|---|---|---|---|---|---|---|---|
| Bayesian-MDL | 3.51 ± 0.16 | 0.11 ± 0.02 | 2.38 ± 0.11 | 95.06 ± 0.40 | 56.25 ± 1.29 | 100.00 ± 0.00 | 80.18 ± 0.76 | 85.10 ± 0.01 | 4.92 ± 0.05 | 87.36 ± 0.03 |
| SHyRe-Unsup | 43.45 ± 0.00 | 25.30 ± 0.00 | 56.76 ± 0.00 | 100.00 ± 0.00 | 12.39 ± 0.00 | 100.00 ± 0.00 | 97.97 ± 0.00 | OOT | 5.62 ± 0.00 | 93.51 ± 0.00 |
| MARIOH-M | 48.89 ± 0.40 | 51.49 ± 0.47 | 68.66 ± 0.21 | 91.84 ± 2.11 | 50.32 ± 12.86 | 100.00 ± 0.00 | 84.49 ± 1.74 | 92.84 ± 1.78 | 10.63 ± 0.75 | 95.73 ± 0.12 |
| MARIOH-F | 48.37 ± 0.72 | 53.26 ± 0.80 | 69.29 ± 0.25 | 96.33 ± 0.00 | 47.16 ± 16.38 | 93.91 ± 6.31 | 96.24 ± 4.51 | 97.08 ± 0.03 | 10.18 ± 0.54 | 96.18 ± 0.10 |
| MARIOH-B | 47.61 ± 0.19 | 27.81 ± 0.10 | 58.14 ± 0.05 | 100.00 ± 0.00 | 54.63 ± 19.68 | 100.00 ± 0.00 | 99.01 ± 0.99 | 97.49 ± 0.07 | 9.88 ± 0.41 | 97.76 ± 0.06 |
| MARIOH | 52.26 ± 1.54 | 53.48 ± 1.42 | 69.85 ± 0.21 | 100.00 ± 0.00 | 58.41 ± 15.22 | 100.00 ± 0.00 | 98.88 ± 0.46 | 97.48 ± 0.05 | 11.55 ± 0.52 | 97.62 ± 0.03 |
Datasets: We use 10 datasets from various domains, summarized in Table I. For each hypergraph H = (V, E_H), we split its hyperedges into halves, E_H^(s) and E_H^(t), to construct the source hypergraph H^(s) = (V^(s), E_H^(s)) and the target hypergraph H^(t) = (V^(t), E_H^(t)). Specifically, for datasets where the timestamp of each hyperedge is available, the hyperedges are split into halves based on their timestamps; otherwise, they are split randomly. The source hypergraph H^(s) is projected into G^(s), which is used to train MARIOH.
To align with the experimental setups of previous studies [13, 6], we use multiplicity-reduced hypergraphs, where the multiplicities of all hyperedges are reduced to 1, i.e., m_H(e) = 1 for all e ∈ E_H, unless stated otherwise. Note that this does not reduce the edge multiplicities in the projected graph to 1. In addition, we also evaluate MARIOH on multiplicity-preserved hypergraphs, where hyperedge multiplicities are not reduced.
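To make the note above concrete, here is a small sketch (with hypothetical toy hyperedges, not data from the paper) showing that reducing hyperedge multiplicities to 1 still leaves edge multiplicities greater than 1 in the projected graph:

```python
from collections import Counter
from itertools import combinations

def project(hyperedges):
    """Clique-expand a hypergraph into its projected graph, where the edge
    multiplicity w_{u,v} counts the hyperedges containing both u and v."""
    w = Counter()
    for e in hyperedges:
        for u, v in combinations(sorted(e), 2):
            w[(u, v)] += 1
    return w

# A toy hypergraph in which the hyperedge {2, 3, 4} has multiplicity 2.
H = [{1, 2, 3}, {2, 3, 4}, {2, 3, 4}]
# Multiplicity-reduced version: every hyperedge multiplicity becomes 1.
H_reduced = [set(e) for e in {frozenset(e) for e in H}]
# The pair (2, 3) still has multiplicity 2 in the reduced projection,
# since it is covered by two distinct hyperedges.
```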
Baselines: We compare MARIOH with eight baseline methods across three categories: (1) overlapping community detection-based methods (Demon [33] and CFinder [34]), (2) clique decomposition-based methods (Clique Covering [35] and Max Clique [36]), and (3) hypergraph reconstruction methods (SHyRe-Count [6], SHyRe-Motif [6], SHyRe-Unsup [6], and Bayesian-MDL [13]). Refer to [7] for detailed descriptions of each method.
Implementation: We implemented MARIOH, Demon, CFinder, and SHyRe-Unsup in Python. For the hyperparameters required by each method, we set: MARIOH with α = 1/20 unless otherwise specified, Demon with a minimum community size of 2 and ε = 1, and CFinder with the optimal k selected within the [0.1, 0.5] quantile range of the hyperedge sizes. For Clique Covering, Max Clique, and Bayesian-MDL, we used their official implementations in graph-tool with default hyperparameters. For SHyRe, we used the hyperparameters recommended by the authors in the original paper. To ensure fairness, the same maximal clique detection algorithm was used across all methods.
Machines: All experiments are conducted on a server with dual 2.4GHz Intel Xeon Silver 4210R processors, 384GB memory, and 4 NVIDIA RTX A6000 GPUs.
IV-B Q1. Accuracy
First and foremost, we evaluate the reconstruction quality of MARIOH against the baselines.
Reconstruction Accuracy: In Table II, we measure the Jaccard similarity between the sets of hyperedges in the target hypergraph and those in the reconstructed hypergraph for the multiplicity-reduced setting. In Table III, we measure the multi-Jaccard similarity for reconstructing multiplicity-preserved hypergraphs, where hyperedge multiplicities are considered. Notably, only some baselines are applicable for reconstructing multiplicity-preserved hypergraphs. In both settings, MARIOH consistently and significantly outperforms its competitors across all datasets. For example, in the multiplicity-reduced setting, on the Enron dataset, MARIOH yields a reconstructed hypergraph with a Jaccard similarity that is 74.51% higher than that of SHyRe-Count. Similarly, in the multiplicity-preserved setting, on the P.School dataset, MARIOH attains a multi-Jaccard similarity that is 111.38% higher than that of SHyRe-Unsup.
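For reference, the two evaluation measures can be implemented as follows; this is an illustrative sketch (hyperedges represented as frozensets of nodes), not the authors' evaluation code:

```python
from collections import Counter

def jaccard(A, B):
    """Jaccard similarity between two collections of hyperedges,
    with multiplicities ignored (sets of distinct hyperedges)."""
    A, B = set(A), set(B)
    return len(A & B) / len(A | B) if A | B else 1.0

def multi_jaccard(A, B):
    """Multiset generalization: hyperedge multiplicities are respected, so
    the intersection sums element-wise minima of the multiplicities and
    the union sums element-wise maxima."""
    cA, cB = Counter(A), Counter(B)
    keys = cA.keys() | cB.keys()
    inter = sum(min(cA[e], cB[e]) for e in keys)
    union = sum(max(cA[e], cB[e]) for e in keys)
    return inter / union if union else 1.0
```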
Case Studies: In Fig. 2, we present the results on the DBLP dataset, focusing on a sub-hypergraph induced by Jure Leskovec and ten of his randomly selected co-authors. Specifically, the co-authorship hypergraph in 2015 is used for training to reconstruct that in 2017. As shown in the figure, MARIOH exactly restores the sub-hypergraph from its projected graph (Jaccard and multi-Jaccard similarity = 1.000), while SHyRe-Count [6] recovers only a subset of the ground-truth hyperedges along with some false positives (Jaccard similarity = 0.333). Refer to the online appendix [7] for more case studies on the Host-Virus and Crime datasets.
TABLE IV: Preservation of structural properties. MARIOH reconstructs hypergraphs with the lowest preservation error (lower is better) across 12 scalar and distributional structural properties. This indicates MARIOH's strong capability to preserve the structural properties of the original hypergraph.

| Structural Property | Bayesian-MDL | SHyRe-Count | SHyRe-Motif | SHyRe-Unsup | MARIOH |
|---|---|---|---|---|---|
| *Scalar properties* | | | | | |
| Number of Nodes | 0.000 ± 0.000 | 0.013 ± 0.033 | 0.007 ± 0.017 | 0.000 ± 0.000 | 0.000 ± 0.000 |
| Number of Hyperedges | 0.279 ± 0.339 | 0.180 ± 0.235 | 0.166 ± 0.211 | 0.167 ± 0.198 | 0.087 ± 0.134 |
| Average Node Degree | 0.178 ± 0.247 | 0.151 ± 0.236 | 0.161 ± 0.227 | 0.095 ± 0.149 | 0.044 ± 0.070 |
| Average Hyperedge Size | 0.187 ± 0.223 | 0.085 ± 0.086 | 0.087 ± 0.091 | 0.089 ± 0.093 | 0.060 ± 0.079 |
| Simplicial Closure Ratio | 0.151 ± 0.314 | 0.178 ± 0.372 | 0.179 ± 0.368 | 0.139 ± 0.236 | 0.107 ± 0.186 |
| Hypergraph Density | 0.279 ± 0.339 | 0.175 ± 0.224 | 0.163 ± 0.205 | 0.167 ± 0.198 | 0.087 ± 0.134 |
| Hypergraph Overlapness | 0.178 ± 0.247 | 0.151 ± 0.236 | 0.161 ± 0.227 | 0.095 ± 0.149 | 0.044 ± 0.070 |
| *Distributional properties* | | | | | |
| Node Degree | 0.154 ± 0.235 | 0.118 ± 0.196 | 0.128 ± 0.192 | 0.081 ± 0.155 | 0.033 ± 0.055 |
| Node-Pair Degree | 0.059 ± 0.073 | 0.087 ± 0.133 | 0.101 ± 0.128 | 0.011 ± 0.020 | 0.025 ± 0.048 |
| Node-Triple Degree | 0.052 ± 0.071 | 0.155 ± 0.373 | 0.041 ± 0.049 | 0.016 ± 0.027 | 0.015 ± 0.030 |
| Hyperedge Homogeneity | 0.151 ± 0.118 | 0.213 ± 0.229 | 0.241 ± 0.230 | 0.055 ± 0.068 | 0.073 ± 0.104 |
| Singular Values | 0.035 ± 0.036 | 0.098 ± 0.124 | 0.102 ± 0.118 | 0.034 ± 0.038 | 0.034 ± 0.038 |
| Average (Overall) | 0.142 ± 0.090 | 0.134 ± 0.055 | 0.128 ± 0.064 | 0.079 ± 0.058 | 0.051 ± 0.032 |
Structure Preservation: For a more comprehensive evaluation, we examine whether the reconstructed hypergraphs preserve the core structural properties of the original hypergraph. Specifically, we compare the reconstructed hypergraph Ĥ^(t) and the original hypergraph H^(t) based on 12 structural properties, categorized into scalar and distributional properties. Scalar properties include the number of nodes and hyperedges, average node degree, average hyperedge size, simplicial closure ratio [3], hypergraph density [37], and hypergraph overlapness [38]. Distributional properties include the distributions of node degrees, node-pair degrees, node-triple degrees, hyperedge homogeneity [38], and singular values of the incidence matrix. For more detailed descriptions of each structural property, refer to [7].
For the scalar properties, we quantify the difference by the normalized difference, specifically |x − y| / max(x, y), where x and y are the values of the ground truth and the reconstruction, respectively, following [6]. For the distributional properties, we compare the two distributions using the Kolmogorov-Smirnov (KS) D-statistic, which quantifies the maximum difference between their cumulative distribution functions. For both metrics, lower values indicate better preservation of the corresponding structural property.
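Both error measures are straightforward to compute; a minimal sketch (a hypothetical helper, not the authors' code) is:

```python
import bisect

def normalized_difference(x, y):
    """|x - y| / max(x, y) for non-negative scalar properties;
    defined here as 0 when both values are 0."""
    return 0.0 if x == y == 0 else abs(x - y) / max(x, y)

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov D-statistic: the maximum gap between
    the two empirical CDFs, evaluated at every observed value."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_sample, t):
        # Fraction of observations <= t.
        return bisect.bisect_right(sorted_sample, t) / len(sorted_sample)

    return max(abs(ecdf(a, t) - ecdf(b, t)) for t in sorted(set(a) | set(b)))
```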
As shown in Table IV, MARIOH achieves the lowest values across all scalar properties and three out of five distributional properties. This demonstrates MARIOHβs strong capability to reconstruct hypergraphs that accurately preserve the structural properties in the original hypergraphs.
TABLE V: Transfer learning performance. Each method is trained on a source dataset and evaluated on a target dataset. MARIOH achieves the highest Jaccard similarity (scaled by a factor of 100) compared to the baselines. "OOT" indicates "out of time" (exceeding 24 hours), and "OOM" represents "out of memory" (system limit: 384 GB).

| Method | DBLP → DBLP | DBLP → MAG-History | DBLP → MAG-TopCS | DBLP → MAG-Geology | Eu → Eu | Eu → Enron | P.School → P.School | P.School → H.School |
|---|---|---|---|---|---|---|---|---|
| SHyRe-Unsup | OOT | OOT | 84.50 ± 0.00 | OOT | 5.19 ± 0.00 | 13.74 ± 0.00 | 8.34 ± 0.00 | 17.00 ± 0.00 |
| SHyRe-Motif | 85.99 ± 0.04 | 76.73 ± 0.52 | OOT | OOT | OOM | OOM | OOT | OOT |
| SHyRe-Count | 86.07 ± 0.00 | 93.86 ± 0.34 | 78.08 ± 0.36 | 48.64 ± 0.31 | 11.14 ± 0.27 | 23.08 ± 0.08 | 42.87 ± 1.66 | 43.83 ± 0.28 |
| MARIOH | 98.13 ± 0.06 | 97.80 ± 0.57 | 92.92 ± 0.22 | 81.66 ± 0.73 | 13.98 ± 0.23 | 27.59 ± 2.50 | 47.48 ± 0.65 | 49.52 ± 2.09 |

IV-C Q2. Transferability
We assess the transferability of MARIOH by evaluating its performance in two distinct scenarios: (1) a transfer setting, where MARIOH is trained on a source dataset and then used to reconstruct a hypergraph in a different target dataset, and (2) a semi-supervised setting, where MARIOH is trained on a source dataset with limited supervision. Both settings aim to evaluate MARIOH's adaptability and resilience under varying levels of data availability and domain shifts.
Transfer Learning: In this setting, MARIOH is trained on source datasets and applied to target datasets within similar domains. As shown in Table V, MARIOH and its competitors are trained on the DBLP, Eu, and P.School datasets and then used to reconstruct hypergraphs on target datasets within similar domains but with potentially different structures. MARIOH consistently and significantly outperforms the baselines in transfer learning across all three domains. For example, on MAG-Geology with DBLP as the source, MARIOH achieves a 67.88% higher Jaccard similarity than SHyRe-Count. These results demonstrate the adaptability of MARIOH and its strong ability to generalize across domains.
TABLE VI: Semi-supervised learning performance. MARIOH maintains high reconstruction accuracy (in terms of Jaccard similarity scaled by a factor of 100), even with reduced training ratios.

| Method | DBLP | Hosts | Enron |
|---|---|---|---|
| Bayesian-MDL | 86.06 ± 0.01 | 53.10 ± 0.32 | 4.77 ± 0.26 |
| SHyRe-Motif | 85.99 ± 0.04 | 49.69 ± 0.59 | 14.14 ± 3.27 |
| SHyRe-Count | 86.07 ± 0.00 | 56.64 ± 0.55 | 14.36 ± 0.50 |
| MARIOH (10%) | 96.52 ± 0.22 | 55.42 ± 2.17 | 22.29 ± 1.78 |
| MARIOH (20%) | 97.11 ± 0.15 | 56.24 ± 1.46 | 22.45 ± 1.14 |
| MARIOH (50%) | 97.88 ± 0.08 | 58.17 ± 2.74 | 24.10 ± 1.32 |
| MARIOH (100%) | 98.13 ± 0.06 | 61.52 ± 2.63 | 25.06 ± 0.60 |
Semi-Supervised Learning: We evaluate MARIOH using only 10%, 20%, and 50% of the available supervision (i.e., hyperedges in the source hypergraph H^(s)). As shown in Table VI, MARIOH demonstrates robust performance, maintaining high Jaccard similarity even with reduced training ratios. Notably, on DBLP and Enron, MARIOH outperforms baselines trained with full supervision, even when trained with only 10% of the supervision. This demonstrates the robustness of MARIOH against limited training data.
IV-D Q3. Applicability
We evaluate the utility of the hypergraphs reconstructed by MARIOH in downstream tasks, specifically node clustering, node classification, and link prediction.
Node Clustering: We perform node clustering on the P.School and H.School datasets, both of which include node labels, allowing for quantitative evaluation. Specifically, we apply spectral clustering to the projected graphs, the reconstructed hypergraphs, and the ground-truth hypergraphs. Clustering performance is measured using Normalized Mutual Information (NMI). As shown in Table VII, MARIOH achieves the highest NMI scores among all reconstructed hypergraphs. In addition, it is worth noting that the ground-truth hypergraphs deliver the best performance, significantly outperforming their projected graphs, which underscores the importance of higher-order information.
Node Classification: We evaluate the effectiveness of our method on a node classification task using datasets with available node labels. Specifically, we generate node embeddings via spectral decomposition of the Laplacian matrices derived from both the weighted projected graphs (obtained through clique expansion) and the original (or reconstructed) hypergraphs. These embeddings serve as input features for an MLP classifier. For a fair comparison, experiments are conducted over multiple random train/test splits, and performance is quantified using Micro and Macro F1 scores.
Our experimental results, presented in Table VIII, demonstrate that embeddings computed from the hypergraph Laplacian consistently outperform those derived from the projected graphs. This indicates that preserving higher-order relationships within the hypergraph structure significantly enhances the discriminative power of the node representations.
Link Prediction: For the link prediction task, we utilize the same 10 datasets previously employed to evaluate reconstruction accuracy. Each edge in the graph G is paired with an equal number of randomly sampled non-edges to form a balanced dataset. This dataset is split into 90% for training and 10% for testing to prevent information leakage. To eliminate bias, any hyperedge in the reconstructed hypergraph Ĥ that contains a test edge is excluded, as shared membership in a hyperedge inherently indicates a link. Test edges are removed during training and thus do not affect the node embeddings.
In projected graph settings, we use the Jaccard index, Adamic-Adar index, preferential attachment score, resource allocation score, (mean, minimum, and maximum of) node degrees, and edge weights as edge features. In hypergraph settings, we use the same features derived from projected graphs plus two extra hypergraph-specific features: the hyperedge Jaccard index and the hyperedge size. We also compute link embeddings as additional features by pooling the embeddings of the two incident nodes generated by a two-layer Graph Convolutional Network (GCN) on the (projected) graph with one-hot encodings as the initial node features. The pooling operation concatenates the element-wise minimum and maximum of the node embeddings. We apply this approach consistently across both projected graph and hypergraph settings. To ensure fairness, all methods use the same GCN architecture for node embedding generation, with identical training configurations.
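The projected-graph edge features listed above are standard neighborhood-based scores; a hypothetical sketch (using a plain adjacency dict, not the authors' pipeline) is:

```python
import math

def pair_features(adj, u, v):
    """Classical structural link-prediction scores for a node pair (u, v),
    computed from an adjacency dict {node: set(neighbors)}."""
    common = adj[u] & adj[v]
    return {
        # Overlap of the two neighborhoods.
        "jaccard": len(common) / len(adj[u] | adj[v]) if adj[u] | adj[v] else 0.0,
        # Common neighbors, down-weighted by log-degree.
        "adamic_adar": sum(1.0 / math.log(len(adj[z])) for z in common
                           if len(adj[z]) > 1),
        "pref_attachment": len(adj[u]) * len(adj[v]),
        # Common neighbors, down-weighted by degree.
        "resource_alloc": sum(1.0 / len(adj[z]) for z in common if adj[z]),
        "min_deg": min(len(adj[u]), len(adj[v])),
        "max_deg": max(len(adj[u]), len(adj[v])),
        "mean_deg": (len(adj[u]) + len(adj[v])) / 2,
    }
```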
As shown in Table IX, where AUC is used as the evaluation metric, hypergraph-based approaches consistently outperform projected graph-based approaches. Moreover, MARIOH leads to the highest AUC among all reconstruction methods. Using projected graphs performs competitively on some datasets like H.School and DBLP but underperforms on others, such as Foursquare and Hosts, due to its inability to capture higher-order interactions. Overall, these results support the usefulness of hypergraphs, especially those reconstructed by MARIOH, in leveraging higher-order information for downstream tasks.
TABLE VII: Node clustering performance. Spectral clustering on hypergraphs reconstructed by MARIOH achieves higher NMI than on those reconstructed by the baselines.

| Input of Spectral Clustering | P.School | H.School |
|---|---|---|
| Projected graph G | 0.8488 | 0.9392 |
| Ĥ by SHyRe-Unsup | 0.8982 | 0.9635 |
| Ĥ by SHyRe-Motif | OOT | 0.9830 |
| Ĥ by SHyRe-Count | 0.9095 | 0.9874 |
| Ĥ by MARIOH | 0.9234 | 0.9936 |
| Original Hypergraph H | 0.9255 | 0.9936 |

TABLE VIII: Node classification performance. We evaluate micro-F1 and macro-F1 scores on the P.School and H.School datasets. Hypergraphs reconstructed by MARIOH tend to lead to higher F1 scores than those reconstructed by the baselines.

| Input of Node Classification | Micro-F1 (P.School) | Micro-F1 (H.School) | Macro-F1 (P.School) | Macro-F1 (H.School) |
|---|---|---|---|---|
| Projected graph G | 0.8380 | 0.8890 | 0.7867 | 0.8605 |
| Ĥ by SHyRe-Unsup | 0.8959 | 0.9720 | 0.8477 | 0.9687 |
| Ĥ by SHyRe-Motif | OOT | 0.9780 | OOT | 0.9744 |
| Ĥ by SHyRe-Count | 0.8760 | 0.9768 | 0.8322 | 0.9736 |
| Ĥ by MARIOH | 0.9140 | 0.9793 | 0.8594 | 0.9765 |
| Original Hypergraph H | 0.9240 | 0.9829 | 0.8720 | 0.9800 |

TABLE IX: Link prediction performance. Using hypergraphs reconstructed by MARIOH tends to result in higher accuracy (in terms of AUC) compared to those reconstructed by the baselines. Average AUC scores across five random seeds are reported with standard deviation. "OOT" indicates "out of time" (exceeding 24 hours).

| Input | Enron | P.School | H.School | Crime | Hosts | Directors | Foursquare | DBLP | Eu | MAG-TopCS | Avg. Rank |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Projected graph G | 83.94 ± 2.99 | 90.76 ± 0.37 | 92.51 ± 1.37 | 78.32 ± 1.11 | 98.61 ± 0.06 | 81.90 ± 3.10 | 99.44 ± 0.02 | 96.54 ± 0.09 | 97.46 ± 0.06 | 94.86 ± 0.22 | 3.90 ± 2.33 |
| Ĥ by SHyRe-Unsup | 80.07 ± 4.26 | 91.21 ± 0.31 | 90.08 ± 1.40 | 81.38 ± 0.80 | 98.75 ± 0.05 | 82.59 ± 3.80 | 99.60 ± 0.02 | OOT | 97.18 ± 0.10 | 95.05 ± 0.22 | 3.50 ± 1.51 |
| Ĥ by SHyRe-Motif | 78.65 ± 3.73 | OOT | 89.92 ± 1.64 | 81.54 ± 0.94 | 98.76 ± 0.03 | 82.59 ± 3.80 | 99.57 ± 0.02 | 96.40 ± 0.11 | OOT | 94.94 ± 0.14 | 4.40 ± 1.58 |
| Ĥ by SHyRe-Count | 77.76 ± 3.09 | 91.04 ± 0.77 | 90.16 ± 1.57 | 81.54 ± 0.94 | 98.77 ± 0.11 | 82.58 ± 3.80 | 99.57 ± 0.01 | 96.40 ± 0.04 | 95.82 ± 0.61 | 94.95 ± 0.19 | 3.80 ± 1.32 |
| Ĥ by MARIOH | 79.75 ± 4.74 | 91.72 ± 0.22 | 90.21 ± 1.51 | 81.60 ± 1.00 | 98.68 ± 0.06 | 82.75 ± 3.71 | 99.60 ± 0.02 | 96.46 ± 0.06 | 97.19 ± 0.09 | 95.10 ± 0.17 | 2.30 ± 1.42 |
| Original Hypergraph H | 84.55 ± 3.20 | 92.16 ± 0.19 | 90.35 ± 1.18 | 81.59 ± 0.82 | 98.70 ± 0.06 | 82.58 ± 3.80 | 99.60 ± 0.02 | 96.39 ± 0.06 | 97.26 ± 0.13 | 95.06 ± 0.14 | 2.40 ± 1.43 |

IV-E Q4. Effectiveness
We evaluate the effectiveness of each component, the importance of multiplicity-aware features, and the robustness of MARIOH to changes in its hyperparameters through ablation studies and sensitivity analysis.
Effects of Multiplicity-Aware Classifier: We evaluate the impact of removing multiplicity-aware features from MARIOH by replacing its features with those from SHyRe-Count [6], a baseline without multiplicity-aware features. This modified version, denoted as MARIOH-M, consistently underperforms compared to MARIOH across all datasets, as shown in Tables II and III. This highlights the importance of multiplicity-aware features for reconstruction accuracy.
Nevertheless, MARIOH-M outperforms SHyRe-Count on most datasets (except Hosts), demonstrating that MARIOHβs candidate search strategy remains more effective even without multiplicity-aware features. This study supports the effectiveness of two key components of MARIOH: its candidate search mechanism and multiplicity-aware clique features.
Effects of Filtering: We assess the impact of filtering in MARIOH by comparing its performance with a version without filtering, denoted as MARIOH-F. As shown in Tables II and III, MARIOH consistently outperforms MARIOH-F across all datasets, highlighting the importance of filtering for improved reconstruction accuracy. Even without filtering, MARIOH-F surpasses other baselines, as size-2 hyperedges, guaranteed by theory, remain in the candidate space. Over iterations, these candidates are assigned high prediction scores and eventually reconstructed as hyperedges.
Effects of Bidirectional Search: We assess the impact of bidirectional search in MARIOH by modifying the reconstruction process to skip sub-clique evaluation for low-scoring candidate cliques, creating a variant called MARIOH-B. As shown in Tables II and III, MARIOH-B shows varied performance across datasets. For example, it performs significantly worse than SHyRe-Count on P.School and H.School, emphasizing the importance of bidirectional search in these cases. However, on other datasets, MARIOH-B outperforms all baselines and even surpasses MARIOH on MAG-TopCS.
This unexpected result on MAG-TopCS suggests that skipping sub-clique evaluation can preserve large candidate cliques that align better with true hyperedges in certain datasets, indicating that bidirectional search, while typically beneficial, may over-prune candidates in specific cases.
Figure 4: Hyperparameter sensitivity analysis. MARIOH is robust to variations in its hyperparameters α, β, and θ_init, in both the multiplicity-reduced setting (above) and the multiplicity-preserved setting (below).
Parameter Sensitivity: We assess the sensitivity of MARIOH to its key hyperparameters: the initial classification threshold (θ_init), the negative prediction processing ratio (β), and the threshold adjust ratio (α). Specifically, the hyperparameters are explored over the ranges θ_init ∈ {0.5, 0.6, ..., 1.0}, β ∈ {5, 10, ..., 100}, and α ∈ {1/5, 1/15, 1/25, 1/35}. As shown in Fig. 4, MARIOH maintains robust performance across most datasets, with consistent reconstruction accuracy under varying settings. However, in the Hosts dataset, changes in α and β cause some fluctuations in the Jaccard and multi-Jaccard similarities, suggesting that datasets with specific structural characteristics may require more careful hyperparameter tuning for optimal results.
IV-F Q5. Scalability
We compare the runtime of MARIOH with competing methods. As shown in Fig. 5, MARIOH is faster than community-based methods like CFinder and Demon. While clique decomposition methods run the fastest, their reconstruction accuracy is much lower (Tables II and III). Among hypergraph reconstruction methods, Bayesian-MDL is the fastest, followed closely by MARIOH. SHyRe-Count is slightly slower than MARIOH on average (see detailed comparisons below), while SHyRe-Motif and SHyRe-Unsup show significantly longer execution times.
We provide a detailed comparison of MARIOH and SHyRe-Count across all 10 real-world hypergraph datasets. Regarding reconstruction accuracy, as summarized in Tables II and III, MARIOH consistently achieves higher Jaccard and multi-Jaccard similarities across all the datasets. This improvement stems from the Filtering and Bidirectional Search steps in MARIOH, which iteratively refine and discover new candidate cliques, whereas SHyRe-Count classifies only a fixed set of candidates. However, these steps come at the cost of speed, as shown in Fig. 6, where MARIOH generally takes more time than SHyRe-Count on most datasets, except for the largest DBLP dataset.
Figure 5:Average runtime of MARIOH and competitors. While MARIOH is slower than basic baselines, it takes less time on average than recent advanced methods (i.e., SHyRe-Motif and SHyRe-Count). For runtimes on individual datasets, see Fig. 6 and the online appendix [7].
Additionally, we evaluate the scalability of MARIOH on datasets of varying sizes, which are generated using HyperCL [38] with DBLP dataset statistics as input. Note that for all these datasets, the original DBLP dataset is used for training, and thus the training time is independent of the dataset size. As shown in Fig. 7, the running times of both the Filtering and Bidirectional Search steps of MARIOH scale nearly linearly (with a slope close to 1 on a log-log scale) as the number of edges in the input graph increases. This result demonstrates that MARIOH is practically useful for large-scale datasets.
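The slope on a log-log scale mentioned above is simply an ordinary least-squares fit of log(runtime) against log(size); a small sketch with made-up numbers (illustrative, not measured runtimes) is:

```python
import math

def loglog_slope(sizes, times):
    """Least-squares slope of log(time) vs. log(size); a slope near 1
    indicates near-linear empirical scaling."""
    xs = [math.log(s) for s in sizes]
    ys = [math.log(t) for t in times]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var
```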
Extra Experiments: In the online appendix [7], we present analyses of (1) feature importance, (2) storage savings through hypergraph reconstruction, and (3) runtimes on individual datasets, in addition to more case studies.
Figure 6: Runtime comparison of MARIOH and SHyRe-Count. The left bar (solid color) for each dataset corresponds to SHyRe-Count, while the right bar (hatched) corresponds to MARIOH. Each segment within a bar represents the proportion of the execution time of the corresponding step relative to the total execution time. The inference step in SHyRe-Count corresponds to the filtering and bidirectional-search steps in MARIOH.
V Related Work
In this section, we review related studies on hypergraph reconstruction. Then, we discuss the broader connections of our work to diverse database studies.
Hypergraph Reconstruction: Hypergraph reconstruction methods seek to recover higher-order interactions from projected graphs. Young et al. [13] proposed a Bayesian approach (Bayesian-MDL) that reconstructs hypergraphs using a Bayesian generative model, emphasizing the principle of parsimony to infer the simplest hypergraph that explains the observed pairwise data. It calculates the posterior probability of a hypergraph and estimates structures through Markov Chain Monte Carlo sampling. However, this assumption of parsimony often falls short in practice [6].

Wang and Kleinberg [6] introduced SHyRe, a supervised method that reconstructs hypergraphs by sampling cliques from the projected graph based on a statistical distribution that captures patterns of hyperedges within maximal cliques. A classifier is then trained to distinguish between hyperedges and non-hyperedges. Specifically, SHyRe-Count uses basic structural features, while SHyRe-Motif incorporates subgraph patterns, such as triangle and square motifs, as additional features for classification. However, SHyRe's reliance on sampling leads to missed hyperedges (false negatives) and ignores edge multiplicity.

To address edge multiplicity, the same authors also proposed an unsupervised method (SHyRe-Unsup) that reconstructs hypergraphs by iteratively selecting maximal cliques as candidates for hyperedges. The cliques are ranked by their size and average edge multiplicity, preferring larger cliques with lower average edge multiplicities. Once selected, a clique is converted into a hyperedge, and the corresponding edge multiplicities in the projected graph are reduced, until all edge multiplicities reach 0. However, this approach struggles with scalability, as it involves repeated intensive computations, and its reconstruction accuracy sometimes falls behind that of the supervised SHyRe variants, revealing the limitations of existing methods for effective hypergraph reconstruction.
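The SHyRe-Unsup-style greedy procedure described above can be sketched as follows. This is a simplified, hypothetical reimplementation (not the authors' code): we assume ties are broken by lower average edge multiplicity and that each covered edge's multiplicity is decremented by one per step; the exact ranking and update rules of [6] may differ.

```python
def maximal_cliques(adj):
    """Basic Bron-Kerbosch enumeration (no pivoting); adequate for the
    small illustrative graphs used here. `adj`: {node: set(neighbors)}."""
    out = []
    def bk(R, P, X):
        if not P and not X:
            out.append(frozenset(R))
            return
        for v in list(P):
            bk(R | {v}, P & adj[v], X & adj[v])
            P.remove(v)
            X.add(v)
    bk(set(), set(adj), set())
    return out

def greedy_unsup_reconstruct(w):
    """Repeatedly pick a maximal clique (largest first, ties broken by lower
    average edge multiplicity), emit it as a hyperedge, and decrement the
    multiplicities of its edges, until every multiplicity reaches 0."""
    w = dict(w)  # {frozenset({u, v}): multiplicity}
    hyperedges = []
    while any(m > 0 for m in w.values()):
        # Build adjacency over edges that still have positive multiplicity.
        adj = {}
        for pair, m in w.items():
            if m > 0:
                u, v = tuple(pair)
                adj.setdefault(u, set()).add(v)
                adj.setdefault(v, set()).add(u)
        def avg_mult(C):
            edges = [frozenset({a, b}) for a in C for b in C if a < b]
            return sum(w[e] for e in edges) / len(edges)
        C = max(maximal_cliques(adj), key=lambda c: (len(c), -avg_mult(c)))
        hyperedges.append(C)
        for e in [frozenset({a, b}) for a in C for b in C if a < b]:
            w[e] -= 1
    return hyperedges
```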
Connections to Database Research: Database research addresses data enhancement tasks, including data integration [39, 40, 41], record linkage [42, 43, 44], and data cleaning [45, 46]. In these tasks, uncovering associations among data elements is crucial, even when the available data is incomplete or simplified. For instance, data integration tackles the challenge of merging disparate data sources by identifying underlying relationships. In record linkage, detecting connections is essential for accurately matching records that refer to the same entity. Similarly, in data cleaning, recovering relationships is important for improving data consistency and quality. Hypergraph reconstruction can also be considered a data enhancement task, where restoring higher-order relationships enhances the data quality for subsequent analysis. Moreover, because hypergraph representations often offer storage savings over graph representations (see the online appendix [7] for a detailed storage savings analysis), hypergraph reconstruction can also be seen as a data compression task, particularly related to lossy graph compression [47, 48, 49, 50].
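The storage-savings intuition can be illustrated with a back-of-the-envelope node-id count (our simplification, not the detailed analysis in [7]): storing a size-k hyperedge directly costs k member ids, while its projection costs two ids per pairwise edge.

```python
def clique_expansion_cost(k):
    # Projecting a size-k hyperedge yields C(k, 2) pairwise edges,
    # each storing 2 node ids: 2 * k*(k-1)/2 = k*(k-1) ids in total.
    return k * (k - 1)

def hyperedge_cost(k):
    # Storing the hyperedge directly needs just its k member ids.
    return k
```

For k = 10, the projection stores 90 ids versus 10 for the hyperedge, and the gap grows quadratically with hyperedge size.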
Figure 7: Scalability analysis ((a) filtering step; (b) bidirectional search step). MARIOH scales nearly linearly with the data size (specifically, the number of edges in the projected graph).

VI. Conclusion
This work presents MARIOH, a supervised, multiplicity-aware hypergraph reconstruction method that addresses the challenges of recovering hypergraphs from projected graphs. In essence, MARIOH incorporates a bidirectional greedy search and multiplicity-aware features to enhance reconstruction efficiency and accuracy, leading to the following strengths:
• Accurate Recovery: Achieving up to 74.51% higher accuracy than existing methods, MARIOH effectively reconstructs hypergraphs with and without hyperedge multiplicity.
• Transferability: Generalizing effectively across datasets within the same domain, MARIOH maintains robustness and adaptability without requiring retraining.
• Applications: Improving performance in downstream tasks such as link prediction and node clustering, MARIOH-reconstructed hypergraphs demonstrate the benefits of recovering higher-order structures.
These properties enable MARIOH to scale to large datasets and remain versatile across diverse domains.
Reproducibility: The code and data used in the paper can be found at https://github.com/KyuhanLee/MARIOH.
Acknowledgements: This work was partly supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2024-00406985, 30%). This work was partly supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-00871 / RS-2022-II220871, Development of AI Autonomy and Knowledge Enhancement for AI Agent Collaboration, 30%) (No. RS-2024-00438638, EntireDB2AI: Foundations and Software for Comprehensive Deep Representation Learning and Prediction on Entire Relational Databases, 30%) (No. RS-2019-II190075, Artificial Intelligence Graduate School Program (KAIST), 10%).
References
[1] F. Battiston, G. Cencetti, I. Iacopini, V. Latora, M. Lucas, A. Patania, J.-G. Young, and G. Petri, "Networks beyond pairwise interactions: Structure and dynamics," Physics Reports, vol. 874, pp. 1–92, 2020.
[2] L. Torres, A. S. Blevins, D. Bassett, and T. Eliassi-Rad, "The why, how, and when of representations for complex systems," SIAM Review, vol. 63, no. 3, pp. 435–485, 2021.
[3] A. R. Benson, R. Abebe, M. T. Schaub, A. Jadbabaie, and J. Kleinberg, "Simplicial closure and higher-order link prediction," Proceedings of the National Academy of Sciences, vol. 115, no. 48, pp. E11221–E11230, 2018.
[4] D. Yang, B. Qu, J. Yang, and P. Cudre-Mauroux, "Revisiting user mobility and social relationships in LBSNs: a hypergraph embedding approach," in WWW, 2019.
[5] A. Antelmi, G. Cordasco, C. Spagnuolo, and P. Szufel, "Social influence maximization in hypergraphs," Entropy, vol. 23, no. 7, p. 796, 2021.
[6] Y. Wang and J. Kleinberg, "From graphs to hypergraphs: Hypergraph projection and its reconstruction," in ICLR, 2024.
[7] "Supplementary results, code, and datasets," https://github.com/KyuhanLee/MARIOH, 2025.
[8] S. Fields and O.-k. Song, "A novel genetic system to detect protein–protein interactions," Nature, vol. 340, no. 6230, pp. 245–246, 1989.
[9] A.-C. Gavin, P. Aloy, P. Grandi, R. Krause, M. Boesche, M. Marzioch, C. Rau, L. J. Jensen, S. Bastuck, B. Dümpelfeld et al., "Proteome survey reveals modularity of the yeast cell machinery," Nature, vol. 440, no. 7084, pp. 631–636, 2006.
[10] N. J. Krogan, G. Cagney, H. Yu, G. Zhong, X. Guo, A. Ignatchenko, J. Li, S. Pu, N. Datta, A. P. Tikuisis et al., "Global landscape of protein complexes in the yeast Saccharomyces cerevisiae," Nature, vol. 440, no. 7084, pp. 637–643, 2006.
[11] M. D. Resnick, P. S. Bearman, R. W. Blum, K. E. Bauman, K. M. Harris, J. Jones, J. Tabor, T. Beuhring, R. E. Sieving, M. Shew et al., "Protecting adolescents from harm: findings from the national longitudinal study on adolescent health," JAMA, vol. 278, no. 10, pp. 823–832, 1997.
[12] N. Eagle and A. Pentland, "Reality mining: sensing complex social systems," Personal and Ubiquitous Computing, vol. 10, pp. 255–268, 2006.
[13] J.-G. Young, G. Petri, and T. P. Peixoto, "Hypergraph reconstruction from network data," Communications Physics, vol. 4, no. 1, p. 135, 2021.
[14] L. Xiao, J. Wang, P. H. Kassani, Y. Zhang, Y. Bai, J. M. Stephen, T. W. Wilson, V. D. Calhoun, and Y.-P. Wang, "Multi-hypergraph learning-based brain functional connectivity analysis in fMRI data," IEEE Transactions on Medical Imaging, vol. 39, no. 5, pp. 1746–1758, 2019.
[15] M. E. Newman, "Coauthorship networks and patterns of scientific collaboration," Proceedings of the National Academy of Sciences, vol. 101, no. suppl_1, pp. 5200–5205, 2004.
[16] E. Sarigöl, R. Pfitzner, I. Scholtes, A. Garas, and F. Schweitzer, "Predicting scientific success based on coauthorship networks," EPJ Data Science, vol. 3, pp. 1–16, 2014.
[17] J. Leskovec, J. Kleinberg, and C. Faloutsos, "Graphs over time: densification laws, shrinking diameters and possible explanations," in KDD, 2005.
[18] W. Hu, M. Fey, M. Zitnik, Y. Dong, H. Ren, B. Liu, M. Catasta, and J. Leskovec, "Open graph benchmark: Datasets for machine learning on graphs," in NeurIPS, 2020.
[19] D. Zhou, J. Huang, and B. Schölkopf, "Learning with hypergraphs: Clustering, classification, and embedding," in NeurIPS, 2006.
[20] Y. Feng, H. You, Z. Zhang, R. Ji, and Y. Gao, "Hypergraph neural networks," in AAAI, 2019.
[21] Y. Dong, W. Sawin, and Y. Bengio, "HNHN: Hypergraph networks with hyperedge neurons," arXiv preprint arXiv:2006.12278, 2020.
[22] E. Chien, C. Pan, J. Peng, and O. Milenkovic, "You are AllSet: A multiset function framework for hypergraph neural networks," in ICLR, 2022.
[23] J. Huang and J. Yang, "UniGNN: a unified framework for graph and hypergraph neural networks," in IJCAI, 2021.
[24] S.-e. Yoon, H. Song, K. Shin, and Y. Yi, "How much and when do we need higher-order information in hypergraphs? a case study on hyperedge prediction," in WWW, 2020.
[25] G. Lee, M. Choe, and K. Shin, "HashNWalk: Hash and random walk based anomaly detection in hyperedge streams," in IJCAI, 2022.
[26] O. Sporns, "Contributions and challenges for network models in cognitive neuroscience," Nature Neuroscience, vol. 17, no. 5, pp. 652–660, 2014.
[27] H. Tang, G. Ma, Y. Zhang, K. Ye, L. Guo, G. Liu, Q. Huang, Y. Wang, O. Ajilore, A. D. Leow et al., "A comprehensive survey of complex brain network representation," Meta-Radiology, vol. 1, no. 3, p. 100046, 2023.
[28] G. Lee, J. Ko, and K. Shin, "Hypergraph motifs: concepts, algorithms, and discoveries," PVLDB, vol. 13, no. 12, pp. 2256–2269, 2020.
[29] Q. F. Lotito, F. Musciotto, A. Montresor, and F. Battiston, "Higher-order motif analysis in hypergraphs," Communications Physics, vol. 5, no. 1, p. 79, 2022.
[30] T. LaRock and R. Lambiotte, "Encapsulation structure and dynamics in hypergraphs," Journal of Physics: Complexity, vol. 4, no. 4, p. 045007, 2023.
[31] L. da Fontoura Costa, "Further generalizations of the Jaccard index," arXiv preprint arXiv:2110.09619, 2021.
[32] I. Amburg, N. Veldt, and A. Benson, "Clustering in graphs and hypergraphs with categorical edge labels," in WWW, 2020.
[33] M. Coscia, G. Rossetti, F. Giannotti, and D. Pedreschi, "DEMON: a local-first discovery method for overlapping communities," in KDD, 2012.
[34] G. Palla, I. Derényi, I. Farkas, and T. Vicsek, "Uncovering the overlapping community structure of complex networks in nature and society," Nature, vol. 435, no. 7043, pp. 814–818, 2005.
[35] A. Conte, R. Grossi, and A. Marino, "Clique covering of large real-world networks," in SAC, 2016.
[36] C. Bron and J. Kerbosch, "Algorithm 457: finding all cliques of an undirected graph," Communications of the ACM, vol. 16, no. 9, pp. 575–577, 1973.
[37] S. Hu, X. Wu, and T. H. Chan, "Maintaining densest subsets efficiently in evolving hypergraphs," in CIKM, 2017.
[38] G. Lee, M. Choe, and K. Shin, "How do hyperedges overlap in real-world hypergraphs? Patterns, measures, and generators," in WWW, 2021.
[39] A. Doan, A. Halevy, and Z. Ives, Principles of Data Integration. Elsevier, 2012.
[40] M. Lenzerini, "Data integration: A theoretical perspective," in PODS, 2002.
[41] E. Rahm and P. A. Bernstein, "A survey of approaches to automatic schema matching," The VLDB Journal, vol. 10, pp. 334–350, 2001.
[42] P. Christen, The Data Matching Process. Springer, 2012.
[43] M. Bilenko, B. Kamath, and R. J. Mooney, "Adaptive blocking: Learning to scale up record linkage," in ICDM, 2006.
[44] A. K. Elmagarmid, P. G. Ipeirotis, and V. S. Verykios, "Duplicate record detection: A survey," IEEE Transactions on Knowledge and Data Engineering, vol. 19, no. 1, pp. 1–16, 2006.
[45] E. Rahm, H. H. Do et al., "Data cleaning: Problems and current approaches," IEEE Data Engineering Bulletin, vol. 23, no. 4, pp. 3–13, 2000.
[46] X. Chu, I. F. Ilyas, S. Krishnan, and J. Wang, "Data cleaning: Overview and emerging challenges," in SIGMOD, 2016.
[47] M. Qiao, H. Zhang, and H. Cheng, "Subgraph matching: on compression and computation," PVLDB, vol. 11, no. 2, pp. 176–188, 2017.
[48] W. Fan, J. Li, X. Wang, and Y. Wu, "Query preserving graph compression," in SIGMOD, 2012.
[49] S. Navlakha, R. Rastogi, and N. Shrivastava, "Graph summarization with bounded error," in SIGMOD, 2008.
[50] Q. Xu, J. Yang, F. Zhang, Z. Chen, J. Guan, K. Chen, J. Fan, Y. Shen, K. Yang, Y. Zhang et al., "Improving graph compression for efficient resource-constrained graph analytics," PVLDB, vol. 17, no. 9, pp. 2212–2226, 2024.